00:00:00.001 Started by upstream project "autotest-per-patch" build number 126217
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.083 using credential 00000000-0000-0000-0000-000000000002
00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.124 Fetching changes from the remote Git repository
00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.167 Using shallow fetch with depth 1
00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.167 > git --version # timeout=10
00:00:00.224 > git --version # 'git version 2.39.2'
00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.162 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.175 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.187 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:05.187 > git config core.sparsecheckout # timeout=10
00:00:05.198 > git read-tree -mu HEAD # timeout=10
00:00:05.217 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:05.238 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:05.239 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:05.345 [Pipeline] Start of Pipeline
00:00:05.359 [Pipeline] library
00:00:05.361 Loading library shm_lib@master
00:00:05.361 Library shm_lib@master is cached. Copying from home.
00:00:05.375 [Pipeline] node
00:00:05.381 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.383 [Pipeline] {
00:00:05.392 [Pipeline] catchError
00:00:05.393 [Pipeline] {
00:00:05.404 [Pipeline] wrap
00:00:05.411 [Pipeline] {
00:00:05.416 [Pipeline] stage
00:00:05.418 [Pipeline] { (Prologue)
00:00:05.588 [Pipeline] sh
00:00:05.872 + logger -p user.info -t JENKINS-CI
00:00:05.891 [Pipeline] echo
00:00:05.893 Node: WFP6
00:00:05.902 [Pipeline] sh
00:00:06.198 [Pipeline] setCustomBuildProperty
00:00:06.208 [Pipeline] echo
00:00:06.209 Cleanup processes
00:00:06.214 [Pipeline] sh
00:00:06.494 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.494 3623280 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.509 [Pipeline] sh
00:00:06.790 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.790 ++ grep -v 'sudo pgrep'
00:00:06.790 ++ awk '{print $1}'
00:00:06.790 + sudo kill -9
00:00:06.790 + true
00:00:06.804 [Pipeline] cleanWs
00:00:06.812 [WS-CLEANUP] Deleting project workspace...
00:00:06.812 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.819 [WS-CLEANUP] done
00:00:06.823 [Pipeline] setCustomBuildProperty
00:00:06.837 [Pipeline] sh
00:00:07.119 + sudo git config --global --replace-all safe.directory '*'
00:00:07.186 [Pipeline] httpRequest
00:00:07.227 [Pipeline] echo
00:00:07.228 Sorcerer 10.211.164.101 is alive
00:00:07.236 [Pipeline] httpRequest
00:00:07.241 HttpMethod: GET
00:00:07.242 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.243 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.262 Response Code: HTTP/1.1 200 OK
00:00:07.262 Success: Status code 200 is in the accepted range: 200,404
00:00:07.263 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:13.096 [Pipeline] sh
00:00:13.376 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:13.392 [Pipeline] httpRequest
00:00:13.430 [Pipeline] echo
00:00:13.432 Sorcerer 10.211.164.101 is alive
00:00:13.442 [Pipeline] httpRequest
00:00:13.446 HttpMethod: GET
00:00:13.447 URL: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:13.448 Sending request to url: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:13.466 Response Code: HTTP/1.1 200 OK
00:00:13.467 Success: Status code 200 is in the accepted range: 200,404
00:00:13.467 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:45.910 [Pipeline] sh
00:00:46.194 + tar --no-same-owner -xf spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz
00:00:48.732 [Pipeline] sh
00:00:49.016 + git -C spdk log --oneline -n5
00:00:49.016 bdeef1ed3 nvmf: add helper function to get a transport poll group
00:00:49.016 2728651ee accel: adjust task per ch define name
00:00:49.016 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:00:49.016 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS
00:00:49.016 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts
00:00:49.029 [Pipeline] }
00:00:49.046 [Pipeline] // stage
00:00:49.057 [Pipeline] stage
00:00:49.059 [Pipeline] { (Prepare)
00:00:49.079 [Pipeline] writeFile
00:00:49.096 [Pipeline] sh
00:00:49.377 + logger -p user.info -t JENKINS-CI
00:00:49.390 [Pipeline] sh
00:00:49.671 + logger -p user.info -t JENKINS-CI
00:00:49.683 [Pipeline] sh
00:00:49.963 + cat autorun-spdk.conf
00:00:49.963 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:49.963 SPDK_TEST_NVMF=1
00:00:49.963 SPDK_TEST_NVME_CLI=1
00:00:49.963 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:49.963 SPDK_TEST_NVMF_NICS=e810
00:00:49.963 SPDK_TEST_VFIOUSER=1
00:00:49.963 SPDK_RUN_UBSAN=1
00:00:49.963 NET_TYPE=phy
00:00:49.970 RUN_NIGHTLY=0
00:00:49.976 [Pipeline] readFile
00:00:50.004 [Pipeline] withEnv
00:00:50.006 [Pipeline] {
00:00:50.020 [Pipeline] sh
00:00:50.302 + set -ex
00:00:50.302 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:50.302 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:50.302 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.302 ++ SPDK_TEST_NVMF=1
00:00:50.302 ++ SPDK_TEST_NVME_CLI=1
00:00:50.302 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.302 ++ SPDK_TEST_NVMF_NICS=e810
00:00:50.302 ++ SPDK_TEST_VFIOUSER=1
00:00:50.302 ++ SPDK_RUN_UBSAN=1
00:00:50.302 ++ NET_TYPE=phy
00:00:50.302 ++ RUN_NIGHTLY=0
00:00:50.303 + case $SPDK_TEST_NVMF_NICS in
00:00:50.303 + DRIVERS=ice
00:00:50.303 + [[ tcp == \r\d\m\a ]]
00:00:50.303 + [[ -n ice ]]
00:00:50.303 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:50.303 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:56.867 rmmod: ERROR: Module irdma is not currently loaded
00:00:56.867 rmmod: ERROR: Module i40iw is not currently loaded
00:00:56.867 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:56.867 + true
00:00:56.867 + for D in $DRIVERS
00:00:56.867 + sudo modprobe ice
00:00:56.876 + exit 0
00:00:56.876 [Pipeline] }
00:00:56.895 [Pipeline] // withEnv
00:00:56.903 [Pipeline] }
00:00:56.921 [Pipeline] // stage
00:00:56.932 [Pipeline] catchError
00:00:56.934 [Pipeline] {
00:00:56.949 [Pipeline] timeout
00:00:56.950 Timeout set to expire in 50 min
00:00:56.951 [Pipeline] {
00:00:56.967 [Pipeline] stage
00:00:56.969 [Pipeline] { (Tests)
00:00:56.984 [Pipeline] sh
00:00:57.268 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:57.268 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:57.268 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:57.268 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:57.268 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:57.268 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:57.268 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:57.268 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:57.268 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:57.268 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:57.268 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:57.268 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:57.268 + source /etc/os-release
00:00:57.268 ++ NAME='Fedora Linux'
00:00:57.268 ++ VERSION='38 (Cloud Edition)'
00:00:57.268 ++ ID=fedora
00:00:57.268 ++ VERSION_ID=38
00:00:57.268 ++ VERSION_CODENAME=
00:00:57.268 ++ PLATFORM_ID=platform:f38
00:00:57.268 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:57.268 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:57.268 ++ LOGO=fedora-logo-icon
00:00:57.268 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:57.268 ++ HOME_URL=https://fedoraproject.org/
00:00:57.268 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:57.268 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:57.268 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:57.268 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:57.268 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:57.268 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:57.268 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:57.268 ++ SUPPORT_END=2024-05-14
00:00:57.268 ++ VARIANT='Cloud Edition'
00:00:57.268 ++ VARIANT_ID=cloud
00:00:57.268 + uname -a
00:00:57.268 Linux spdk-wfp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:57.268 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:59.813 Hugepages
00:00:59.813 node hugesize free / total
00:00:59.813 node0 1048576kB 0 / 0
00:00:59.813 node0 2048kB 0 / 0
00:00:59.813 node1 1048576kB 0 / 0
00:00:59.813 node1 2048kB 0 / 0
00:00:59.813
00:00:59.813 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:59.813 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:59.813 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:59.813 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:59.813 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:59.813 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:59.813 + rm -f /tmp/spdk-ld-path
00:00:59.813 + source autorun-spdk.conf
00:00:59.813 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.813 ++ SPDK_TEST_NVMF=1
00:00:59.813 ++ SPDK_TEST_NVME_CLI=1
00:00:59.813 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.813 ++ SPDK_TEST_NVMF_NICS=e810
00:00:59.813 ++ SPDK_TEST_VFIOUSER=1
00:00:59.813 ++ SPDK_RUN_UBSAN=1
00:00:59.813 ++ NET_TYPE=phy
00:00:59.813 ++ RUN_NIGHTLY=0
00:00:59.813 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:59.814 + [[ -n '' ]]
00:00:59.814 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.814 + for M in /var/spdk/build-*-manifest.txt
00:00:59.814 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:59.814 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:59.814 + for M in /var/spdk/build-*-manifest.txt
00:00:59.814 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:59.814 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:59.814 ++ uname
00:00:59.814 + [[ Linux == \L\i\n\u\x ]]
00:00:59.814 + sudo dmesg -T
00:00:59.814 + sudo dmesg --clear
00:00:59.814 + dmesg_pid=3624715
00:00:59.814 + [[ Fedora Linux == FreeBSD ]]
00:00:59.814 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:59.814 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:59.814 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:59.814 + [[ -x /usr/src/fio-static/fio ]]
00:00:59.814 + export FIO_BIN=/usr/src/fio-static/fio
00:00:59.814 + FIO_BIN=/usr/src/fio-static/fio
00:00:59.814 + sudo dmesg -Tw
00:00:59.814 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:59.814 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:59.814 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:59.814 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:59.814 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:59.814 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:59.814 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:59.814 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:59.814 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:00.130 Test configuration:
00:01:00.130 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.130 SPDK_TEST_NVMF=1
00:01:00.130 SPDK_TEST_NVME_CLI=1
00:01:00.130 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.130 SPDK_TEST_NVMF_NICS=e810
00:01:00.130 SPDK_TEST_VFIOUSER=1
00:01:00.130 SPDK_RUN_UBSAN=1
00:01:00.130 NET_TYPE=phy
00:01:00.130 RUN_NIGHTLY=0
18:15:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:15:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
18:15:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:15:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:15:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:45 -- paths/export.sh@5 -- $ export PATH
18:15:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:15:45 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
18:15:45 -- common/autobuild_common.sh@444 -- $ date +%s
18:15:45 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721060145.XXXXXX
18:15:45 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721060145.raQf0f
18:15:45 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
18:15:45 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
18:15:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
18:15:45 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
18:15:45 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
18:15:45 -- common/autobuild_common.sh@460 -- $ get_config_params
18:15:45 -- common/autotest_common.sh@396 -- $ xtrace_disable
18:15:45 -- common/autotest_common.sh@10 -- $ set +x
18:15:45 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
18:15:45 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
18:15:45 -- pm/common@17 -- $ local monitor
18:15:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:45 -- pm/common@21 -- $ date +%s
18:15:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:15:45 -- pm/common@21 -- $ date +%s
18:15:45 -- pm/common@25 -- $ sleep 1
18:15:45 -- pm/common@21 -- $ date +%s
18:15:45 -- pm/common@21 -- $ date +%s
18:15:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060145
18:15:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060145
18:15:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060145
18:15:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060145
00:01:00.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060145_collect-vmstat.pm.log
00:01:00.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060145_collect-cpu-load.pm.log
00:01:00.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060145_collect-cpu-temp.pm.log
00:01:00.131 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060145_collect-bmc-pm.bmc.pm.log
18:15:46 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
18:15:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:15:46 -- spdk/autobuild.sh@12 -- $ umask 022
18:15:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
18:15:46 -- spdk/autobuild.sh@16 -- $ date -u
00:01:01.065 Mon Jul 15 04:15:46 PM UTC 2024
18:15:46 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:01.065 v24.09-pre-207-gbdeef1ed3
18:15:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
18:15:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:15:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:15:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:15:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:15:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.065 ************************************
00:01:01.065 START TEST ubsan
00:01:01.065 ************************************
18:15:46 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:01.065 using ubsan
00:01:01.065
00:01:01.065 real 0m0.000s
00:01:01.065 user 0m0.000s
00:01:01.065 sys 0m0.000s
18:15:46 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
18:15:46 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:01.065 ************************************
00:01:01.065 END TEST ubsan
00:01:01.065 ************************************
18:15:46 -- common/autotest_common.sh@1142 -- $ return 0
18:15:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:15:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:15:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:15:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
18:15:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:15:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:15:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:15:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
18:15:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:01.323 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:01.323 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:01.580 Using 'verbs' RDMA provider
00:01:14.717 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:26.925 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:26.925 Creating mk/config.mk...done.
00:01:26.925 Creating mk/cc.flags.mk...done.
00:01:26.925 Type 'make' to build.
18:16:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
18:16:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:16:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:16:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.925 ************************************
00:01:26.925 START TEST make
00:01:26.925 ************************************
18:16:11 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:26.925 make[1]: Nothing to be done for 'all'.
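For orientation: the configure invocation at autobuild.sh@67 above, followed by the run_test make step, amounts to the following local reproduction. This is a minimal sketch only — it assumes an SPDK checkout at the workspace path used in this log, and the flag list is copied verbatim from the logged invocation.

# Sketch of the build step performed above; paths and flags come from this log.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
  --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j96  # same parallelism as "run_test make make -j96" above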
00:01:27.862 The Meson build system
00:01:27.862 Version: 1.3.1
00:01:27.862 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:27.862 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:27.862 Build type: native build
00:01:27.862 Project name: libvfio-user
00:01:27.862 Project version: 0.0.1
00:01:27.862 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:27.862 C linker for the host machine: cc ld.bfd 2.39-16
00:01:27.862 Host machine cpu family: x86_64
00:01:27.862 Host machine cpu: x86_64
00:01:27.862 Run-time dependency threads found: YES
00:01:27.862 Library dl found: YES
00:01:27.862 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:27.862 Run-time dependency json-c found: YES 0.17
00:01:27.862 Run-time dependency cmocka found: YES 1.1.7
00:01:27.862 Program pytest-3 found: NO
00:01:27.862 Program flake8 found: NO
00:01:27.862 Program misspell-fixer found: NO
00:01:27.862 Program restructuredtext-lint found: NO
00:01:27.862 Program valgrind found: YES (/usr/bin/valgrind)
00:01:27.862 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:27.862 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:27.862 Compiler for C supports arguments -Wwrite-strings: YES
00:01:27.862 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:27.862 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:27.862 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:27.862 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:27.862 Build targets in project: 8
00:01:27.862 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:27.862 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:27.862
00:01:27.862 libvfio-user 0.0.1
00:01:27.862
00:01:27.862 User defined options
00:01:27.862 buildtype : debug
00:01:27.862 default_library: shared
00:01:27.862 libdir : /usr/local/lib
00:01:27.862
00:01:27.862 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:28.426 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:28.426 [1/37] Compiling C object samples/null.p/null.c.o
00:01:28.426 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:28.426 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:28.426 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:28.426 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:28.426 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:28.426 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:28.426 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:28.426 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:28.426 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:28.426 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:28.426 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:28.426 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:28.426 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:28.426 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:28.426 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:28.426 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:28.426 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:28.426 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:28.426 [20/37] Compiling C object samples/server.p/server.c.o
00:01:28.426 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:28.426 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:28.426 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:28.426 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:28.426 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:28.426 [26/37] Compiling C object samples/client.p/client.c.o
00:01:28.426 [27/37] Linking target samples/client
00:01:28.683 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:28.683 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:28.683 [30/37] Linking target test/unit_tests
00:01:28.683 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:28.683 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:28.683 [33/37] Linking target samples/gpio-pci-idio-16
00:01:28.683 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:28.683 [35/37] Linking target samples/server
00:01:28.683 [36/37] Linking target samples/lspci
00:01:28.683 [37/37] Linking target samples/null
00:01:28.683 INFO: autodetecting backend as ninja
00:01:28.683 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
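For orientation: the DESTDIR= line that follows is the standard staged-install idiom — meson install writes under the staging root rather than the live filesystem, so with libdir /usr/local/lib (configured above) the shared library ends up inside <stage>/usr/local/lib. A minimal sketch, with the staging path copied from the next log line; the find check is illustrative and not part of the CI job.

# Sketch of a staged meson install plus a check of where the artifacts land.
stage=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
DESTDIR="$stage" meson install --quiet -C "$stage/build-debug"
find "$stage" -name 'libvfio-user.so*'  # expect .../usr/local/lib/libvfio-user.so.0.0.1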
00:01:28.941 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:29.198 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:29.198 ninja: no work to do.
00:01:34.460 The Meson build system
00:01:34.460 Version: 1.3.1
00:01:34.460 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:34.460 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:34.460 Build type: native build
00:01:34.460 Program cat found: YES (/usr/bin/cat)
00:01:34.460 Project name: DPDK
00:01:34.460 Project version: 24.03.0
00:01:34.460 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:34.460 C linker for the host machine: cc ld.bfd 2.39-16
00:01:34.460 Host machine cpu family: x86_64
00:01:34.460 Host machine cpu: x86_64
00:01:34.460 Message: ## Building in Developer Mode ##
00:01:34.460 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:34.460 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:34.460 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:34.460 Program python3 found: YES (/usr/bin/python3)
00:01:34.460 Program cat found: YES (/usr/bin/cat)
00:01:34.460 Compiler for C supports arguments -march=native: YES
00:01:34.460 Checking for size of "void *" : 8
00:01:34.460 Checking for size of "void *" : 8 (cached)
00:01:34.460 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:34.460 Library m found: YES
00:01:34.460 Library numa found: YES
00:01:34.460 Has header "numaif.h" : YES
00:01:34.460 Library fdt found: NO
00:01:34.460 Library execinfo found: NO
00:01:34.460 Has header "execinfo.h" : YES
00:01:34.460 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:34.460 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:34.460 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:34.460 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:34.460 Run-time dependency openssl found: YES 3.0.9
00:01:34.460 Run-time dependency libpcap found: YES 1.10.4
00:01:34.460 Has header "pcap.h" with dependency libpcap: YES
00:01:34.460 Compiler for C supports arguments -Wcast-qual: YES
00:01:34.460 Compiler for C supports arguments -Wdeprecated: YES
00:01:34.460 Compiler for C supports arguments -Wformat: YES
00:01:34.460 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:34.460 Compiler for C supports arguments -Wformat-security: NO
00:01:34.460 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.460 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:34.460 Compiler for C supports arguments -Wnested-externs: YES
00:01:34.460 Compiler for C supports arguments -Wold-style-definition: YES
00:01:34.460 Compiler for C supports arguments -Wpointer-arith: YES
00:01:34.460 Compiler for C supports arguments -Wsign-compare: YES
00:01:34.460 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:34.460 Compiler for C supports arguments -Wundef: YES
00:01:34.460 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.460 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:34.460 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:34.460 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.460 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:34.460 Program objdump found: YES (/usr/bin/objdump)
00:01:34.460 Compiler for C supports arguments -mavx512f: YES
00:01:34.460 Checking if "AVX512 checking" compiles: YES
00:01:34.460 Fetching value of define "__SSE4_2__" : 1
00:01:34.460 Fetching value of define "__AES__" : 1
00:01:34.460 Fetching value of define "__AVX__" : 1
00:01:34.460 Fetching value of define "__AVX2__" : 1
00:01:34.460 Fetching value of define "__AVX512BW__" : 1
00:01:34.460 Fetching value of define "__AVX512CD__" : 1
00:01:34.460 Fetching value of define "__AVX512DQ__" : 1
00:01:34.460 Fetching value of define "__AVX512F__" : 1
00:01:34.460 Fetching value of define "__AVX512VL__" : 1
00:01:34.460 Fetching value of define "__PCLMUL__" : 1
00:01:34.460 Fetching value of define "__RDRND__" : 1
00:01:34.460 Fetching value of define "__RDSEED__" : 1
00:01:34.460 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:34.460 Fetching value of define "__znver1__" : (undefined)
00:01:34.460 Fetching value of define "__znver2__" : (undefined)
00:01:34.460 Fetching value of define "__znver3__" : (undefined)
00:01:34.460 Fetching value of define "__znver4__" : (undefined)
00:01:34.460 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:34.460 Message: lib/log: Defining dependency "log"
00:01:34.460 Message: lib/kvargs: Defining dependency "kvargs"
00:01:34.460 Message: lib/telemetry: Defining dependency "telemetry"
00:01:34.460 Checking for function "getentropy" : NO
00:01:34.460 Message: lib/eal: Defining dependency "eal"
00:01:34.460 Message: lib/ring: Defining dependency "ring"
00:01:34.460 Message: lib/rcu: Defining dependency "rcu"
00:01:34.460 Message: lib/mempool: Defining dependency "mempool"
00:01:34.460 Message: lib/mbuf: Defining dependency "mbuf"
00:01:34.460 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:34.460 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:34.460 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:34.460 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:34.460 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:34.460 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:34.460 Compiler for C supports arguments -mpclmul: YES
00:01:34.460 Compiler for C supports arguments -maes: YES
00:01:34.460 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:34.460 Compiler for C supports arguments -mavx512bw: YES
00:01:34.460 Compiler for C supports arguments -mavx512dq: YES
00:01:34.460 Compiler for C supports arguments -mavx512vl: YES
00:01:34.460 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:34.460 Compiler for C supports arguments -mavx2: YES
00:01:34.460 Compiler for C supports arguments -mavx: YES
00:01:34.460 Message: lib/net: Defining dependency "net"
00:01:34.460 Message: lib/meter: Defining dependency "meter"
00:01:34.460 Message: lib/ethdev: Defining dependency "ethdev"
00:01:34.460 Message: lib/pci: Defining dependency "pci"
00:01:34.460 Message: lib/cmdline: Defining dependency "cmdline"
00:01:34.460 Message: lib/hash: Defining dependency "hash"
00:01:34.460 Message: lib/timer: Defining dependency "timer"
00:01:34.460 Message: lib/compressdev: Defining dependency "compressdev"
00:01:34.460 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:34.460 Message: lib/dmadev: Defining dependency "dmadev"
00:01:34.460 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:34.460 Message: lib/power: Defining dependency "power"
00:01:34.460 Message: lib/reorder: Defining dependency "reorder"
00:01:34.460 Message: lib/security: Defining dependency "security"
00:01:34.460 Has header "linux/userfaultfd.h" : YES
00:01:34.460 Has header "linux/vduse.h" : YES
00:01:34.460 Message: lib/vhost: Defining dependency "vhost"
00:01:34.460 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:34.460 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:34.460 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:34.460 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:34.460 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:34.460 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:34.460 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:34.460 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:34.460 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:34.460 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:34.460 Program doxygen found: YES (/usr/bin/doxygen)
00:01:34.460 Configuring doxy-api-html.conf using configuration
00:01:34.460 Configuring doxy-api-man.conf using configuration
00:01:34.460 Program mandb found: YES (/usr/bin/mandb)
00:01:34.460 Program sphinx-build found: NO
00:01:34.460 Configuring rte_build_config.h using configuration
00:01:34.460 Message:
00:01:34.460 =================
00:01:34.460 Applications Enabled
00:01:34.460 =================
00:01:34.460
00:01:34.460 apps:
00:01:34.460
00:01:34.460
00:01:34.460 Message:
00:01:34.460 =================
00:01:34.460 Libraries Enabled
00:01:34.460 =================
00:01:34.460
00:01:34.460 libs:
00:01:34.460 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:34.460 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:34.460 cryptodev, dmadev, power, reorder, security, vhost,
00:01:34.460
00:01:34.460 Message:
00:01:34.460 ===============
00:01:34.460 Drivers Enabled
00:01:34.460 ===============
00:01:34.460
00:01:34.460 common:
00:01:34.460
00:01:34.460 bus:
00:01:34.460 pci, vdev,
00:01:34.460 mempool:
00:01:34.460 ring,
00:01:34.460 dma:
00:01:34.460
00:01:34.460 net:
00:01:34.460
00:01:34.460 crypto:
00:01:34.460
00:01:34.460 compress:
00:01:34.460
00:01:34.460 vdpa:
00:01:34.460
00:01:34.460
00:01:34.460 Message:
00:01:34.460 =================
00:01:34.460 Content Skipped
00:01:34.460 =================
00:01:34.460
00:01:34.460 apps:
00:01:34.460 dumpcap: explicitly disabled via build config
00:01:34.460 graph: explicitly disabled via build config
00:01:34.460 pdump: explicitly disabled via build config
00:01:34.460 proc-info: explicitly disabled via build config
00:01:34.460 test-acl: explicitly disabled via build config
00:01:34.460 test-bbdev: explicitly disabled via build config
00:01:34.460 test-cmdline: explicitly disabled via build config
00:01:34.460 test-compress-perf: explicitly disabled via build config
00:01:34.460 test-crypto-perf: explicitly disabled via build config
00:01:34.460 test-dma-perf: explicitly disabled via build config
00:01:34.460 test-eventdev: explicitly disabled via build config
00:01:34.460 test-fib: explicitly disabled via build config
00:01:34.460 test-flow-perf: explicitly disabled via build config
00:01:34.460 test-gpudev: explicitly disabled via build config
00:01:34.460 test-mldev: explicitly disabled via build config
00:01:34.460 test-pipeline: explicitly disabled via build config
00:01:34.460 test-pmd: explicitly disabled via build config
00:01:34.460 test-regex: explicitly disabled via build config
00:01:34.460 test-sad: explicitly disabled via build config
00:01:34.460 test-security-perf: explicitly disabled via build config
00:01:34.460
00:01:34.461 libs:
00:01:34.461 argparse: explicitly disabled via build config
00:01:34.461 metrics: explicitly disabled via build config
00:01:34.461 acl: explicitly disabled via build config
00:01:34.461 bbdev: explicitly disabled via build config
00:01:34.461 bitratestats: explicitly disabled via build config
00:01:34.461 bpf: explicitly disabled via build config
00:01:34.461 cfgfile: explicitly disabled via build config
00:01:34.461 distributor: explicitly disabled via build config
00:01:34.461 efd: explicitly disabled via build config
00:01:34.461 eventdev: explicitly disabled via build config
00:01:34.461 dispatcher: explicitly disabled via build config
00:01:34.461 gpudev: explicitly disabled via build config
00:01:34.461 gro: explicitly disabled via build config
00:01:34.461 gso: explicitly disabled via build config
00:01:34.461 ip_frag: explicitly disabled via build config
00:01:34.461 jobstats: explicitly disabled via build config
00:01:34.461 latencystats: explicitly disabled via build config
00:01:34.461 lpm: explicitly disabled via build config
00:01:34.461 member: explicitly disabled via build config
00:01:34.461 pcapng: explicitly disabled via build config
00:01:34.461 rawdev: explicitly disabled via build config
00:01:34.461 regexdev: explicitly disabled via build config
00:01:34.461 mldev: explicitly disabled via build config
00:01:34.461 rib: explicitly disabled via build config
00:01:34.461 sched: explicitly disabled via build config
00:01:34.461 stack: explicitly disabled via build config
00:01:34.461 ipsec: explicitly disabled via build config
00:01:34.461 pdcp: explicitly disabled via build config
00:01:34.461 fib: explicitly disabled via build config
00:01:34.461 port: explicitly disabled via build config
00:01:34.461 pdump: explicitly disabled via build config
00:01:34.461 table: explicitly disabled via build config
00:01:34.461 pipeline: explicitly disabled via build config
00:01:34.461 graph: explicitly disabled via build config
00:01:34.461 node: explicitly disabled via build config
00:01:34.461
00:01:34.461 drivers:
00:01:34.461 common/cpt: not in enabled drivers build config
00:01:34.461 common/dpaax: not in enabled drivers build config
00:01:34.461 common/iavf: not in enabled drivers build config
00:01:34.461 common/idpf: not in enabled drivers build config
00:01:34.461 common/ionic: not in enabled drivers build config
00:01:34.461 common/mvep: not in enabled drivers build config
00:01:34.461 common/octeontx: not in enabled drivers build config
00:01:34.461 bus/auxiliary: not in enabled drivers build config
00:01:34.461 bus/cdx: not in enabled drivers build config
00:01:34.461 bus/dpaa: not in enabled drivers build config
00:01:34.461 bus/fslmc: not in enabled drivers build config
00:01:34.461 bus/ifpga: not in enabled drivers build config
00:01:34.461 bus/platform: not in enabled drivers build config
00:01:34.461 bus/uacce: not in enabled drivers build config
00:01:34.461 bus/vmbus: not in enabled drivers build config
00:01:34.461 common/cnxk: not in enabled drivers build config
00:01:34.461 common/mlx5: not in enabled drivers build config
00:01:34.461 common/nfp: not in enabled drivers build config
00:01:34.461 common/nitrox: not in enabled drivers build config
00:01:34.461 common/qat: not in enabled drivers build config
00:01:34.461 common/sfc_efx: not in enabled drivers build config
00:01:34.461 mempool/bucket: not in enabled drivers build config
00:01:34.461 mempool/cnxk: not in enabled drivers build config
00:01:34.461 mempool/dpaa: not in enabled drivers build config
00:01:34.461 mempool/dpaa2: not in enabled drivers build config
00:01:34.461 mempool/octeontx: not in enabled drivers build config
00:01:34.461 mempool/stack: not in enabled drivers build config
00:01:34.461 dma/cnxk: not in enabled drivers build config
00:01:34.461 dma/dpaa: not in enabled drivers build config
00:01:34.461 dma/dpaa2: not in enabled drivers build config
00:01:34.461 dma/hisilicon: not in enabled drivers build config
00:01:34.461 dma/idxd: not in enabled drivers build config
00:01:34.461 dma/ioat: not in enabled drivers build config
00:01:34.461 dma/skeleton: not in enabled drivers build config
00:01:34.461 net/af_packet: not in enabled drivers build config
00:01:34.461 net/af_xdp: not in enabled drivers build config
00:01:34.461 net/ark: not in enabled drivers build config
00:01:34.461 net/atlantic: not in enabled drivers build config
00:01:34.461 net/avp: not in enabled drivers build config
00:01:34.461 net/axgbe: not in enabled drivers build config
00:01:34.461 net/bnx2x: not in enabled drivers build config
00:01:34.461 net/bnxt: not in enabled drivers build config
00:01:34.461 net/bonding: not in enabled drivers build config
00:01:34.461 net/cnxk: not in enabled drivers build config
00:01:34.461 net/cpfl: not in enabled drivers build config
00:01:34.461 net/cxgbe: not in enabled drivers build config
00:01:34.461 net/dpaa: not in enabled drivers build config
00:01:34.461 net/dpaa2: not in enabled drivers build config
00:01:34.461 net/e1000: not in enabled drivers build config
00:01:34.461 net/ena: not in enabled drivers build config
00:01:34.461 net/enetc: not in enabled drivers build config
00:01:34.461 net/enetfec: not in enabled drivers build config
00:01:34.461 net/enic: not in enabled drivers build config
00:01:34.461 net/failsafe: not in enabled drivers build config
00:01:34.461 net/fm10k: not in enabled drivers build config
00:01:34.461 net/gve: not in enabled drivers build config
00:01:34.461 net/hinic: not in enabled drivers build config
00:01:34.461 net/hns3: not in enabled drivers build config
00:01:34.461 net/i40e: not in enabled drivers build config
00:01:34.461 net/iavf: not in enabled drivers build config
00:01:34.461 net/ice: not in enabled drivers build config
00:01:34.461 net/idpf: not in enabled drivers build config
00:01:34.461 net/igc: not in enabled drivers build config
00:01:34.461 net/ionic: not in enabled drivers build config
00:01:34.461 net/ipn3ke: not in enabled drivers build config
00:01:34.461 net/ixgbe: not in enabled drivers build config
00:01:34.461 net/mana: not in enabled drivers build config
00:01:34.461 net/memif: not in enabled drivers build config
00:01:34.461 net/mlx4: not in enabled drivers build config
00:01:34.461 net/mlx5: not in enabled drivers build config
00:01:34.461 net/mvneta: not in enabled drivers build config
00:01:34.461 net/mvpp2: not in enabled drivers build config
00:01:34.461 net/netvsc: not in enabled drivers build config
00:01:34.461 net/nfb: not in enabled drivers build config
00:01:34.461 net/nfp: not in enabled drivers build config
00:01:34.461 net/ngbe: not in enabled drivers build config
00:01:34.461 net/null: not in enabled drivers build config
00:01:34.461 net/octeontx: not in enabled drivers build config
00:01:34.461 net/octeon_ep: not in enabled drivers build config
00:01:34.461 net/pcap: not in enabled drivers build config
00:01:34.461 net/pfe: not in enabled drivers build config
00:01:34.461 net/qede: not in enabled drivers build config
00:01:34.461 net/ring: not in enabled drivers build config
00:01:34.461 net/sfc: not in enabled drivers build config
00:01:34.461 net/softnic: not in enabled drivers build config
00:01:34.461 net/tap: not in enabled drivers build config
00:01:34.461 net/thunderx: not in enabled drivers build config
00:01:34.461 net/txgbe: not in enabled drivers build config
00:01:34.461 net/vdev_netvsc: not in enabled drivers build config
00:01:34.461 net/vhost: not in enabled drivers build config
00:01:34.461 net/virtio: not in enabled drivers build config
00:01:34.461 net/vmxnet3: not in enabled drivers build config
00:01:34.461 raw/*: missing internal dependency, "rawdev"
00:01:34.461 crypto/armv8: not in enabled drivers build config
00:01:34.461 crypto/bcmfs: not in enabled drivers build config
00:01:34.461 crypto/caam_jr: not in enabled drivers build config
00:01:34.461 crypto/ccp: not in enabled drivers build config
00:01:34.461 crypto/cnxk: not in enabled drivers build config
00:01:34.461 crypto/dpaa_sec: not in enabled drivers build config
00:01:34.461 crypto/dpaa2_sec: not in enabled drivers build config
00:01:34.461 crypto/ipsec_mb: not in enabled drivers build config
00:01:34.461 crypto/mlx5: not in enabled drivers build config
00:01:34.461 crypto/mvsam: not in enabled drivers build config
00:01:34.461 crypto/nitrox: not in enabled drivers build config
00:01:34.461 crypto/null: not in enabled drivers build config
00:01:34.461 crypto/octeontx: not in enabled drivers build config
00:01:34.461 crypto/openssl: not in enabled drivers build config
00:01:34.461 crypto/scheduler: not in enabled drivers build config
00:01:34.461 crypto/uadk: not in enabled drivers build config
00:01:34.461 crypto/virtio: not in enabled drivers build config
00:01:34.461 compress/isal: not in enabled drivers build config
00:01:34.461 compress/mlx5: not in enabled drivers build config
00:01:34.461 compress/nitrox: not in enabled drivers build config
00:01:34.461 compress/octeontx: not in enabled drivers build config
00:01:34.461 compress/zlib: not in enabled drivers build config
00:01:34.461 regex/*: missing internal dependency, "regexdev"
00:01:34.461 ml/*: missing internal dependency, "mldev"
00:01:34.461 vdpa/ifc: not in enabled drivers build config
00:01:34.461 vdpa/mlx5: not in enabled drivers build config
00:01:34.461 vdpa/nfp: not in enabled drivers build config
00:01:34.461 vdpa/sfc: not in enabled drivers build config
00:01:34.461 event/*: missing internal dependency, "eventdev"
00:01:34.461 baseband/*: missing internal dependency, "bbdev"
00:01:34.461 gpu/*: missing internal dependency, "gpudev"
00:01:34.461
00:01:34.461
00:01:34.461 Build targets in project: 85
00:01:34.461
00:01:34.461 DPDK 24.03.0
00:01:34.461
00:01:34.461 User defined options
00:01:34.461 buildtype : debug
00:01:34.461 default_library : shared
00:01:34.461 libdir : lib
00:01:34.461 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:34.461 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:34.461 c_link_args :
00:01:34.461 cpu_instruction_set: native
00:01:34.461 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:34.461 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:34.461 enable_docs : false
00:01:34.461 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:34.461 enable_kmods : false
00:01:34.461 max_lcores : 128
00:01:34.461 tests : false
00:01:34.461
00:01:34.461 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:35.034 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:35.034 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:35.034 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:35.034 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:35.034 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:35.034 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:35.034 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:35.034 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:35.034 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:35.034 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:35.034 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:35.034 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:35.034 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:35.034 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:35.034 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:35.034 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:35.034 [16/268] Linking static target lib/librte_kvargs.a
00:01:35.034 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:35.034 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:35.034 [19/268] Linking static target lib/librte_log.a
00:01:35.034 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:35.034 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:35.034 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:35.034 [23/268] Linking static target lib/librte_pci.a
00:01:35.034 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:35.034 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:35.034 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:35.555 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:35.555 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:35.555 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:35.555 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:35.555 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:35.555 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:35.555 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:35.555 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:35.555 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:35.555 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:35.555 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:35.555 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:35.555 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:35.555 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:35.555 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:35.555 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:35.555 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:35.555 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:35.555 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:35.555 [46/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:35.555 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:35.555 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:35.555 [49/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:35.555 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:35.555 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:35.555 [52/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:35.555 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:35.555 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:35.555 [55/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.555 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:35.555 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:35.555 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:35.555 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:35.555 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:35.555 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:35.555 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:35.555 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:35.555 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:35.555 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:35.555 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:35.555 [67/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:35.555 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:35.555 [69/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:35.555 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:35.555 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.555 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.555 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.555 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.555 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.555 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.555 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.555 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.555 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.555 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.555 [81/268] Linking static target lib/librte_meter.a 00:01:35.555 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.555 [83/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.555 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.555 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.555 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.555 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.555 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.814 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.814 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.814 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.814 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.814 [93/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.814 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.814 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.814 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.814 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.814 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.814 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.814 [100/268] Linking static target lib/librte_telemetry.a 00:01:35.814 [101/268] Linking static target lib/librte_ring.a 00:01:35.814 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.814 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.814 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.814 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.814 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.814 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.814 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.814 [109/268] Linking static target lib/librte_mempool.a 00:01:35.814 [110/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.814 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.814 [112/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:35.814 [113/268] Linking static target lib/librte_rcu.a 00:01:35.814 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.814 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.814 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.814 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.814 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.814 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.814 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.814 [121/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.814 [122/268] Linking static target lib/librte_net.a 00:01:35.814 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.814 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.814 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.814 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.814 [127/268] Linking static target lib/librte_eal.a 00:01:35.814 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.814 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.814 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.814 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.814 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.814 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.814 [134/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.814 [135/268] Linking static target lib/librte_cmdline.a 00:01:35.814 [136/268] Linking static target lib/librte_mbuf.a 00:01:35.814 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.814 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.072 [139/268] Linking target lib/librte_log.so.24.1 00:01:36.072 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.072 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.072 [142/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.072 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.073 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.073 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.073 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.073 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.073 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.073 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.073 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.073 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.073 [152/268] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.073 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.073 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.073 [155/268] Linking static target lib/librte_dmadev.a 00:01:36.073 [156/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.073 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.073 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.073 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.073 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.073 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:36.073 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.073 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.073 [164/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.073 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.073 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.073 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.073 [168/268] Linking static target lib/librte_timer.a 00:01:36.073 [169/268] Linking static target lib/librte_reorder.a 00:01:36.073 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.073 [171/268] Linking target lib/librte_kvargs.so.24.1 00:01:36.073 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.073 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.073 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.073 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:36.073 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.073 [177/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.073 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.073 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.330 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.330 [181/268] Linking static target lib/librte_compressdev.a 00:01:36.330 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.330 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.330 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.330 [185/268] Linking static target lib/librte_security.a 00:01:36.330 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.330 [187/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.330 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.331 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.331 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:36.331 [191/268] Linking static target lib/librte_power.a 00:01:36.331 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.331 [193/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:36.331 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.331 [195/268] Linking static target lib/librte_hash.a 00:01:36.331 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.331 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.331 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.331 [199/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.331 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.331 [201/268] Linking static target drivers/librte_bus_pci.a 00:01:36.331 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.331 [203/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.590 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.590 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.590 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:36.590 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.590 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.590 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.590 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.590 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:36.590 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.590 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.590 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.590 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.590 [216/268] Linking static target lib/librte_cryptodev.a 00:01:36.590 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.848 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.848 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.848 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.848 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.106 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.106 [223/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:37.106 [224/268] Linking static target lib/librte_ethdev.a 00:01:37.107 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.107 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.107 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.043 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:38.043 [229/268] Linking static target lib/librte_vhost.a 00:01:38.610 
[230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.986 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.256 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.245 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.245 [234/268] Linking target lib/librte_eal.so.24.1 00:01:46.245 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:46.245 [236/268] Linking target lib/librte_timer.so.24.1 00:01:46.245 [237/268] Linking target lib/librte_ring.so.24.1 00:01:46.245 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:46.245 [239/268] Linking target lib/librte_meter.so.24.1 00:01:46.245 [240/268] Linking target lib/librte_pci.so.24.1 00:01:46.245 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:46.504 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:46.504 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:46.504 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:46.504 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:46.504 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:46.504 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:46.504 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:46.504 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:46.504 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:46.504 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:46.761 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:46.761 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:46.761 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:46.761 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:46.761 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:46.761 [257/268] Linking target lib/librte_net.so.24.1 00:01:46.761 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:47.019 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:47.019 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:47.019 [261/268] Linking target lib/librte_security.so.24.1 00:01:47.019 [262/268] Linking target lib/librte_hash.so.24.1 00:01:47.019 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:47.019 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:47.019 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:47.019 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:47.278 [267/268] Linking target lib/librte_power.so.24.1 00:01:47.278 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:47.278 INFO: autodetecting backend as ninja 00:01:47.278 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:48.214 CC lib/log/log.o 00:01:48.214 CC lib/log/log_flags.o 00:01:48.214 CC lib/log/log_deprecated.o 00:01:48.214 CC lib/ut_mock/mock.o 00:01:48.214 CC lib/ut/ut.o 
00:01:48.214 LIB libspdk_log.a 00:01:48.214 LIB libspdk_ut.a 00:01:48.214 LIB libspdk_ut_mock.a 00:01:48.473 SO libspdk_log.so.7.0 00:01:48.473 SO libspdk_ut.so.2.0 00:01:48.473 SO libspdk_ut_mock.so.6.0 00:01:48.473 SYMLINK libspdk_log.so 00:01:48.473 SYMLINK libspdk_ut_mock.so 00:01:48.473 SYMLINK libspdk_ut.so 00:01:48.731 CC lib/util/base64.o 00:01:48.731 CC lib/util/bit_array.o 00:01:48.731 CC lib/util/crc16.o 00:01:48.732 CC lib/util/cpuset.o 00:01:48.732 CC lib/util/crc32.o 00:01:48.732 CC lib/util/crc32c.o 00:01:48.732 CC lib/util/crc32_ieee.o 00:01:48.732 CC lib/util/crc64.o 00:01:48.732 CC lib/ioat/ioat.o 00:01:48.732 CC lib/util/dif.o 00:01:48.732 CC lib/dma/dma.o 00:01:48.732 CC lib/util/fd.o 00:01:48.732 CC lib/util/file.o 00:01:48.732 CC lib/util/hexlify.o 00:01:48.732 CC lib/util/iov.o 00:01:48.732 CXX lib/trace_parser/trace.o 00:01:48.732 CC lib/util/math.o 00:01:48.732 CC lib/util/pipe.o 00:01:48.732 CC lib/util/strerror_tls.o 00:01:48.732 CC lib/util/string.o 00:01:48.732 CC lib/util/uuid.o 00:01:48.732 CC lib/util/fd_group.o 00:01:48.732 CC lib/util/xor.o 00:01:48.732 CC lib/util/zipf.o 00:01:48.990 CC lib/vfio_user/host/vfio_user_pci.o 00:01:48.990 CC lib/vfio_user/host/vfio_user.o 00:01:48.990 LIB libspdk_dma.a 00:01:48.990 SO libspdk_dma.so.4.0 00:01:48.990 LIB libspdk_ioat.a 00:01:48.990 SYMLINK libspdk_dma.so 00:01:48.990 SO libspdk_ioat.so.7.0 00:01:48.990 SYMLINK libspdk_ioat.so 00:01:48.990 LIB libspdk_vfio_user.a 00:01:49.248 SO libspdk_vfio_user.so.5.0 00:01:49.248 LIB libspdk_util.a 00:01:49.248 SYMLINK libspdk_vfio_user.so 00:01:49.248 SO libspdk_util.so.9.1 00:01:49.248 SYMLINK libspdk_util.so 00:01:49.506 LIB libspdk_trace_parser.a 00:01:49.506 SO libspdk_trace_parser.so.5.0 00:01:49.506 SYMLINK libspdk_trace_parser.so 00:01:49.764 CC lib/env_dpdk/env.o 00:01:49.764 CC lib/env_dpdk/memory.o 00:01:49.764 CC lib/env_dpdk/pci.o 00:01:49.764 CC lib/rdma_utils/rdma_utils.o 00:01:49.764 CC lib/rdma_provider/common.o 00:01:49.764 CC lib/env_dpdk/init.o 00:01:49.764 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:49.764 CC lib/env_dpdk/threads.o 00:01:49.764 CC lib/env_dpdk/pci_ioat.o 00:01:49.764 CC lib/env_dpdk/pci_virtio.o 00:01:49.764 CC lib/env_dpdk/pci_vmd.o 00:01:49.764 CC lib/vmd/vmd.o 00:01:49.764 CC lib/env_dpdk/pci_idxd.o 00:01:49.764 CC lib/env_dpdk/pci_event.o 00:01:49.764 CC lib/vmd/led.o 00:01:49.764 CC lib/env_dpdk/sigbus_handler.o 00:01:49.764 CC lib/env_dpdk/pci_dpdk.o 00:01:49.764 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:49.764 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:49.764 CC lib/idxd/idxd.o 00:01:49.764 CC lib/idxd/idxd_user.o 00:01:49.764 CC lib/json/json_parse.o 00:01:49.764 CC lib/idxd/idxd_kernel.o 00:01:49.764 CC lib/json/json_util.o 00:01:49.764 CC lib/json/json_write.o 00:01:49.764 CC lib/conf/conf.o 00:01:49.764 LIB libspdk_rdma_provider.a 00:01:49.764 LIB libspdk_conf.a 00:01:50.022 LIB libspdk_rdma_utils.a 00:01:50.022 SO libspdk_rdma_provider.so.6.0 00:01:50.022 SO libspdk_rdma_utils.so.1.0 00:01:50.022 SO libspdk_conf.so.6.0 00:01:50.022 LIB libspdk_json.a 00:01:50.022 SO libspdk_json.so.6.0 00:01:50.022 SYMLINK libspdk_rdma_provider.so 00:01:50.022 SYMLINK libspdk_conf.so 00:01:50.022 SYMLINK libspdk_rdma_utils.so 00:01:50.022 SYMLINK libspdk_json.so 00:01:50.022 LIB libspdk_idxd.a 00:01:50.022 SO libspdk_idxd.so.12.0 00:01:50.280 LIB libspdk_vmd.a 00:01:50.280 SO libspdk_vmd.so.6.0 00:01:50.280 SYMLINK libspdk_idxd.so 00:01:50.280 SYMLINK libspdk_vmd.so 00:01:50.280 CC lib/jsonrpc/jsonrpc_server.o 00:01:50.280 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:50.280 CC lib/jsonrpc/jsonrpc_client.o 00:01:50.280 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:50.538 LIB libspdk_jsonrpc.a 00:01:50.538 SO libspdk_jsonrpc.so.6.0 00:01:50.538 SYMLINK libspdk_jsonrpc.so 00:01:50.538 LIB libspdk_env_dpdk.a 00:01:50.813 SO libspdk_env_dpdk.so.14.1 00:01:50.813 SYMLINK libspdk_env_dpdk.so 00:01:50.813 CC lib/rpc/rpc.o 00:01:51.137 LIB libspdk_rpc.a 00:01:51.137 SO libspdk_rpc.so.6.0 00:01:51.137 SYMLINK libspdk_rpc.so 00:01:51.405 CC lib/keyring/keyring.o 00:01:51.405 CC lib/trace/trace.o 00:01:51.405 CC lib/trace/trace_flags.o 00:01:51.405 CC lib/keyring/keyring_rpc.o 00:01:51.405 CC lib/trace/trace_rpc.o 00:01:51.405 CC lib/notify/notify.o 00:01:51.405 CC lib/notify/notify_rpc.o 00:01:51.663 LIB libspdk_notify.a 00:01:51.664 LIB libspdk_keyring.a 00:01:51.664 SO libspdk_notify.so.6.0 00:01:51.664 SO libspdk_keyring.so.1.0 00:01:51.664 LIB libspdk_trace.a 00:01:51.664 SO libspdk_trace.so.10.0 00:01:51.664 SYMLINK libspdk_notify.so 00:01:51.664 SYMLINK libspdk_keyring.so 00:01:51.923 SYMLINK libspdk_trace.so 00:01:52.211 CC lib/sock/sock.o 00:01:52.211 CC lib/sock/sock_rpc.o 00:01:52.211 CC lib/thread/thread.o 00:01:52.211 CC lib/thread/iobuf.o 00:01:52.469 LIB libspdk_sock.a 00:01:52.469 SO libspdk_sock.so.10.0 00:01:52.469 SYMLINK libspdk_sock.so 00:01:52.727 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:52.727 CC lib/nvme/nvme_ctrlr.o 00:01:52.727 CC lib/nvme/nvme_fabric.o 00:01:52.727 CC lib/nvme/nvme_ns_cmd.o 00:01:52.727 CC lib/nvme/nvme_ns.o 00:01:52.727 CC lib/nvme/nvme_pcie_common.o 00:01:52.727 CC lib/nvme/nvme_pcie.o 00:01:52.727 CC lib/nvme/nvme_qpair.o 00:01:52.727 CC lib/nvme/nvme.o 00:01:52.727 CC lib/nvme/nvme_quirks.o 00:01:52.727 CC lib/nvme/nvme_transport.o 00:01:52.727 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:52.727 CC lib/nvme/nvme_discovery.o 00:01:52.727 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:52.727 CC lib/nvme/nvme_tcp.o 00:01:52.727 CC lib/nvme/nvme_opal.o 00:01:52.727 CC lib/nvme/nvme_io_msg.o 00:01:52.727 CC lib/nvme/nvme_poll_group.o 00:01:52.727 CC lib/nvme/nvme_zns.o 00:01:52.727 CC lib/nvme/nvme_stubs.o 00:01:52.727 CC lib/nvme/nvme_auth.o 00:01:52.727 CC lib/nvme/nvme_cuse.o 00:01:52.727 CC lib/nvme/nvme_vfio_user.o 00:01:52.727 CC lib/nvme/nvme_rdma.o 00:01:53.294 LIB libspdk_thread.a 00:01:53.294 SO libspdk_thread.so.10.1 00:01:53.294 SYMLINK libspdk_thread.so 00:01:53.552 CC lib/init/json_config.o 00:01:53.552 CC lib/init/subsystem.o 00:01:53.552 CC lib/init/rpc.o 00:01:53.552 CC lib/init/subsystem_rpc.o 00:01:53.552 CC lib/vfu_tgt/tgt_endpoint.o 00:01:53.552 CC lib/vfu_tgt/tgt_rpc.o 00:01:53.552 CC lib/accel/accel.o 00:01:53.552 CC lib/accel/accel_rpc.o 00:01:53.552 CC lib/blob/blobstore.o 00:01:53.552 CC lib/accel/accel_sw.o 00:01:53.552 CC lib/virtio/virtio_vhost_user.o 00:01:53.552 CC lib/blob/request.o 00:01:53.552 CC lib/virtio/virtio.o 00:01:53.552 CC lib/virtio/virtio_vfio_user.o 00:01:53.552 CC lib/blob/zeroes.o 00:01:53.552 CC lib/virtio/virtio_pci.o 00:01:53.552 CC lib/blob/blob_bs_dev.o 00:01:53.810 LIB libspdk_init.a 00:01:53.810 SO libspdk_init.so.5.0 00:01:53.810 LIB libspdk_virtio.a 00:01:53.810 LIB libspdk_vfu_tgt.a 00:01:53.810 SO libspdk_vfu_tgt.so.3.0 00:01:53.810 SO libspdk_virtio.so.7.0 00:01:53.810 SYMLINK libspdk_init.so 00:01:53.810 SYMLINK libspdk_vfu_tgt.so 00:01:54.069 SYMLINK libspdk_virtio.so 00:01:54.069 CC lib/event/app.o 00:01:54.069 CC lib/event/reactor.o 00:01:54.069 CC lib/event/log_rpc.o 00:01:54.069 CC lib/event/app_rpc.o 00:01:54.069 CC 
lib/event/scheduler_static.o 00:01:54.327 LIB libspdk_accel.a 00:01:54.327 SO libspdk_accel.so.15.1 00:01:54.327 SYMLINK libspdk_accel.so 00:01:54.327 LIB libspdk_nvme.a 00:01:54.586 LIB libspdk_event.a 00:01:54.586 SO libspdk_nvme.so.13.1 00:01:54.586 SO libspdk_event.so.14.0 00:01:54.586 SYMLINK libspdk_event.so 00:01:54.586 CC lib/bdev/bdev.o 00:01:54.586 CC lib/bdev/bdev_rpc.o 00:01:54.586 CC lib/bdev/bdev_zone.o 00:01:54.586 CC lib/bdev/part.o 00:01:54.586 CC lib/bdev/scsi_nvme.o 00:01:54.844 SYMLINK libspdk_nvme.so 00:01:55.779 LIB libspdk_blob.a 00:01:55.779 SO libspdk_blob.so.11.0 00:01:55.779 SYMLINK libspdk_blob.so 00:01:56.037 CC lib/blobfs/blobfs.o 00:01:56.037 CC lib/blobfs/tree.o 00:01:56.037 CC lib/lvol/lvol.o 00:01:56.295 LIB libspdk_bdev.a 00:01:56.553 SO libspdk_bdev.so.15.1 00:01:56.553 SYMLINK libspdk_bdev.so 00:01:56.553 LIB libspdk_blobfs.a 00:01:56.553 SO libspdk_blobfs.so.10.0 00:01:56.553 LIB libspdk_lvol.a 00:01:56.811 SO libspdk_lvol.so.10.0 00:01:56.811 SYMLINK libspdk_blobfs.so 00:01:56.811 SYMLINK libspdk_lvol.so 00:01:56.811 CC lib/scsi/dev.o 00:01:56.811 CC lib/nbd/nbd.o 00:01:56.811 CC lib/scsi/lun.o 00:01:56.811 CC lib/nbd/nbd_rpc.o 00:01:56.811 CC lib/scsi/port.o 00:01:56.811 CC lib/scsi/scsi.o 00:01:56.811 CC lib/scsi/scsi_bdev.o 00:01:56.811 CC lib/ftl/ftl_core.o 00:01:56.811 CC lib/nvmf/ctrlr.o 00:01:56.811 CC lib/scsi/scsi_pr.o 00:01:56.811 CC lib/ftl/ftl_init.o 00:01:56.811 CC lib/nvmf/ctrlr_discovery.o 00:01:56.811 CC lib/scsi/scsi_rpc.o 00:01:56.811 CC lib/ftl/ftl_layout.o 00:01:56.811 CC lib/nvmf/ctrlr_bdev.o 00:01:56.811 CC lib/scsi/task.o 00:01:56.811 CC lib/ublk/ublk.o 00:01:56.811 CC lib/ftl/ftl_debug.o 00:01:56.811 CC lib/nvmf/subsystem.o 00:01:56.811 CC lib/ublk/ublk_rpc.o 00:01:56.811 CC lib/ftl/ftl_io.o 00:01:56.811 CC lib/nvmf/nvmf.o 00:01:56.811 CC lib/ftl/ftl_sb.o 00:01:56.811 CC lib/nvmf/nvmf_rpc.o 00:01:56.811 CC lib/ftl/ftl_l2p.o 00:01:56.811 CC lib/nvmf/transport.o 00:01:56.811 CC lib/ftl/ftl_l2p_flat.o 00:01:56.811 CC lib/ftl/ftl_nv_cache.o 00:01:56.812 CC lib/nvmf/tcp.o 00:01:56.812 CC lib/nvmf/stubs.o 00:01:56.812 CC lib/ftl/ftl_band.o 00:01:56.812 CC lib/nvmf/mdns_server.o 00:01:56.812 CC lib/nvmf/vfio_user.o 00:01:56.812 CC lib/ftl/ftl_band_ops.o 00:01:56.812 CC lib/ftl/ftl_writer.o 00:01:56.812 CC lib/ftl/ftl_rq.o 00:01:56.812 CC lib/nvmf/rdma.o 00:01:56.812 CC lib/ftl/ftl_reloc.o 00:01:56.812 CC lib/nvmf/auth.o 00:01:56.812 CC lib/ftl/ftl_p2l.o 00:01:56.812 CC lib/ftl/ftl_l2p_cache.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.812 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.812 CC lib/ftl/utils/ftl_conf.o 00:01:56.812 CC lib/ftl/utils/ftl_md.o 00:01:56.812 CC lib/ftl/utils/ftl_mempool.o 00:01:56.812 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.812 CC lib/ftl/utils/ftl_property.o 00:01:56.812 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.812 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.812 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.812 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.812 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.812 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.812 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.812 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.812 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.812 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.812 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.812 CC lib/ftl/base/ftl_base_dev.o 00:01:56.812 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.812 CC lib/ftl/ftl_trace.o 00:01:57.378 LIB libspdk_nbd.a 00:01:57.378 SO libspdk_nbd.so.7.0 00:01:57.636 LIB libspdk_ublk.a 00:01:57.636 SYMLINK libspdk_nbd.so 00:01:57.636 SO libspdk_ublk.so.3.0 00:01:57.636 LIB libspdk_scsi.a 00:01:57.636 SO libspdk_scsi.so.9.0 00:01:57.636 SYMLINK libspdk_ublk.so 00:01:57.636 SYMLINK libspdk_scsi.so 00:01:57.893 LIB libspdk_ftl.a 00:01:57.893 SO libspdk_ftl.so.9.0 00:01:57.893 CC lib/iscsi/conn.o 00:01:57.893 CC lib/vhost/vhost.o 00:01:57.893 CC lib/iscsi/init_grp.o 00:01:57.893 CC lib/vhost/vhost_rpc.o 00:01:57.893 CC lib/iscsi/iscsi.o 00:01:57.893 CC lib/vhost/vhost_scsi.o 00:01:57.893 CC lib/iscsi/md5.o 00:01:57.893 CC lib/vhost/vhost_blk.o 00:01:57.893 CC lib/vhost/rte_vhost_user.o 00:01:57.893 CC lib/iscsi/param.o 00:01:57.893 CC lib/iscsi/portal_grp.o 00:01:57.893 CC lib/iscsi/tgt_node.o 00:01:57.893 CC lib/iscsi/iscsi_subsystem.o 00:01:57.893 CC lib/iscsi/iscsi_rpc.o 00:01:57.893 CC lib/iscsi/task.o 00:01:58.151 SYMLINK libspdk_ftl.so 00:01:58.715 LIB libspdk_nvmf.a 00:01:58.715 SO libspdk_nvmf.so.18.1 00:01:58.715 LIB libspdk_vhost.a 00:01:58.715 SO libspdk_vhost.so.8.0 00:01:58.715 SYMLINK libspdk_nvmf.so 00:01:58.973 SYMLINK libspdk_vhost.so 00:01:58.973 LIB libspdk_iscsi.a 00:01:58.973 SO libspdk_iscsi.so.8.0 00:01:59.230 SYMLINK libspdk_iscsi.so 00:01:59.796 CC module/vfu_device/vfu_virtio.o 00:01:59.796 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.796 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.796 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.796 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.796 CC module/accel/ioat/accel_ioat.o 00:01:59.796 LIB libspdk_env_dpdk_rpc.a 00:01:59.796 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.796 CC module/accel/iaa/accel_iaa.o 00:01:59.796 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.796 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.796 CC module/accel/dsa/accel_dsa.o 00:01:59.796 CC module/accel/error/accel_error.o 00:01:59.796 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.796 CC module/accel/error/accel_error_rpc.o 00:01:59.796 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.796 CC module/keyring/linux/keyring_rpc.o 00:01:59.796 CC module/keyring/linux/keyring.o 00:01:59.796 CC module/keyring/file/keyring.o 00:01:59.796 CC module/blob/bdev/blob_bdev.o 00:01:59.796 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.796 CC module/keyring/file/keyring_rpc.o 00:01:59.796 CC module/sock/posix/posix.o 00:01:59.796 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.796 SYMLINK libspdk_env_dpdk_rpc.so 00:02:00.054 LIB libspdk_keyring_file.a 00:02:00.054 LIB libspdk_scheduler_dpdk_governor.a 00:02:00.054 LIB libspdk_keyring_linux.a 00:02:00.054 LIB libspdk_scheduler_gscheduler.a 00:02:00.054 LIB libspdk_accel_error.a 00:02:00.054 LIB libspdk_accel_ioat.a 00:02:00.054 LIB libspdk_accel_iaa.a 00:02:00.054 SO libspdk_scheduler_gscheduler.so.4.0 00:02:00.054 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:00.054 SO libspdk_accel_ioat.so.6.0 00:02:00.054 LIB libspdk_scheduler_dynamic.a 00:02:00.054 SO libspdk_accel_error.so.2.0 00:02:00.054 SO libspdk_keyring_file.so.1.0 00:02:00.054 SO 
libspdk_keyring_linux.so.1.0 00:02:00.054 SO libspdk_accel_iaa.so.3.0 00:02:00.054 SO libspdk_scheduler_dynamic.so.4.0 00:02:00.054 SYMLINK libspdk_scheduler_gscheduler.so 00:02:00.054 LIB libspdk_accel_dsa.a 00:02:00.054 LIB libspdk_blob_bdev.a 00:02:00.054 SYMLINK libspdk_accel_error.so 00:02:00.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:00.054 SYMLINK libspdk_accel_ioat.so 00:02:00.054 SYMLINK libspdk_keyring_file.so 00:02:00.054 SYMLINK libspdk_keyring_linux.so 00:02:00.054 SYMLINK libspdk_accel_iaa.so 00:02:00.054 SO libspdk_accel_dsa.so.5.0 00:02:00.054 SO libspdk_blob_bdev.so.11.0 00:02:00.055 SYMLINK libspdk_scheduler_dynamic.so 00:02:00.055 SYMLINK libspdk_accel_dsa.so 00:02:00.055 LIB libspdk_vfu_device.a 00:02:00.055 SYMLINK libspdk_blob_bdev.so 00:02:00.313 SO libspdk_vfu_device.so.3.0 00:02:00.313 SYMLINK libspdk_vfu_device.so 00:02:00.313 LIB libspdk_sock_posix.a 00:02:00.571 SO libspdk_sock_posix.so.6.0 00:02:00.571 SYMLINK libspdk_sock_posix.so 00:02:00.571 CC module/bdev/gpt/gpt.o 00:02:00.571 CC module/bdev/gpt/vbdev_gpt.o 00:02:00.571 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.571 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:00.571 CC module/bdev/error/vbdev_error.o 00:02:00.571 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:00.571 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.571 CC module/bdev/error/vbdev_error_rpc.o 00:02:00.571 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:00.571 CC module/bdev/split/vbdev_split.o 00:02:00.571 CC module/bdev/split/vbdev_split_rpc.o 00:02:00.571 CC module/bdev/delay/vbdev_delay.o 00:02:00.571 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:00.571 CC module/bdev/malloc/bdev_malloc.o 00:02:00.571 CC module/bdev/aio/bdev_aio.o 00:02:00.571 CC module/bdev/nvme/bdev_nvme.o 00:02:00.571 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:00.571 CC module/bdev/aio/bdev_aio_rpc.o 00:02:00.571 CC module/bdev/raid/bdev_raid.o 00:02:00.571 CC module/bdev/raid/bdev_raid_sb.o 00:02:00.571 CC module/bdev/raid/bdev_raid_rpc.o 00:02:00.571 CC module/bdev/nvme/bdev_mdns_client.o 00:02:00.571 CC module/bdev/nvme/nvme_rpc.o 00:02:00.571 CC module/bdev/raid/raid0.o 00:02:00.571 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:00.571 CC module/bdev/nvme/vbdev_opal.o 00:02:00.571 CC module/bdev/raid/raid1.o 00:02:00.571 CC module/bdev/ftl/bdev_ftl.o 00:02:00.571 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:00.571 CC module/bdev/passthru/vbdev_passthru.o 00:02:00.571 CC module/bdev/raid/concat.o 00:02:00.571 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:00.571 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:00.571 CC module/bdev/null/bdev_null.o 00:02:00.571 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.571 CC module/bdev/null/bdev_null_rpc.o 00:02:00.571 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:00.571 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:00.571 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:00.571 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:00.571 CC module/bdev/iscsi/bdev_iscsi.o 00:02:00.571 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:00.829 LIB libspdk_blobfs_bdev.a 00:02:00.829 SO libspdk_blobfs_bdev.so.6.0 00:02:00.829 LIB libspdk_bdev_split.a 00:02:00.829 SO libspdk_bdev_split.so.6.0 00:02:00.829 LIB libspdk_bdev_gpt.a 00:02:00.829 SYMLINK libspdk_blobfs_bdev.so 00:02:01.086 LIB libspdk_bdev_null.a 00:02:01.086 LIB libspdk_bdev_error.a 00:02:01.086 SO libspdk_bdev_gpt.so.6.0 00:02:01.086 LIB libspdk_bdev_passthru.a 00:02:01.086 SO libspdk_bdev_error.so.6.0 00:02:01.086 LIB libspdk_bdev_ftl.a 00:02:01.086 SO libspdk_bdev_null.so.6.0 
00:02:01.086 SYMLINK libspdk_bdev_split.so 00:02:01.086 LIB libspdk_bdev_aio.a 00:02:01.086 LIB libspdk_bdev_delay.a 00:02:01.086 LIB libspdk_bdev_zone_block.a 00:02:01.086 SO libspdk_bdev_passthru.so.6.0 00:02:01.086 LIB libspdk_bdev_malloc.a 00:02:01.086 SYMLINK libspdk_bdev_gpt.so 00:02:01.086 SO libspdk_bdev_ftl.so.6.0 00:02:01.086 LIB libspdk_bdev_iscsi.a 00:02:01.086 SO libspdk_bdev_aio.so.6.0 00:02:01.086 SO libspdk_bdev_delay.so.6.0 00:02:01.086 SO libspdk_bdev_zone_block.so.6.0 00:02:01.086 SYMLINK libspdk_bdev_error.so 00:02:01.086 SO libspdk_bdev_malloc.so.6.0 00:02:01.086 SYMLINK libspdk_bdev_null.so 00:02:01.086 SO libspdk_bdev_iscsi.so.6.0 00:02:01.086 SYMLINK libspdk_bdev_passthru.so 00:02:01.086 LIB libspdk_bdev_lvol.a 00:02:01.086 SYMLINK libspdk_bdev_ftl.so 00:02:01.086 SYMLINK libspdk_bdev_aio.so 00:02:01.086 SYMLINK libspdk_bdev_malloc.so 00:02:01.086 SYMLINK libspdk_bdev_zone_block.so 00:02:01.086 SYMLINK libspdk_bdev_delay.so 00:02:01.086 LIB libspdk_bdev_virtio.a 00:02:01.086 SO libspdk_bdev_lvol.so.6.0 00:02:01.086 SYMLINK libspdk_bdev_iscsi.so 00:02:01.086 SO libspdk_bdev_virtio.so.6.0 00:02:01.086 SYMLINK libspdk_bdev_lvol.so 00:02:01.344 SYMLINK libspdk_bdev_virtio.so 00:02:01.344 LIB libspdk_bdev_raid.a 00:02:01.617 SO libspdk_bdev_raid.so.6.0 00:02:01.617 SYMLINK libspdk_bdev_raid.so 00:02:02.185 LIB libspdk_bdev_nvme.a 00:02:02.186 SO libspdk_bdev_nvme.so.7.0 00:02:02.444 SYMLINK libspdk_bdev_nvme.so 00:02:03.013 CC module/event/subsystems/vmd/vmd.o 00:02:03.013 CC module/event/subsystems/iobuf/iobuf.o 00:02:03.013 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:03.013 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:03.013 CC module/event/subsystems/sock/sock.o 00:02:03.013 CC module/event/subsystems/scheduler/scheduler.o 00:02:03.013 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:03.013 CC module/event/subsystems/keyring/keyring.o 00:02:03.013 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:03.272 LIB libspdk_event_keyring.a 00:02:03.272 LIB libspdk_event_vhost_blk.a 00:02:03.272 LIB libspdk_event_iobuf.a 00:02:03.272 LIB libspdk_event_scheduler.a 00:02:03.272 LIB libspdk_event_vmd.a 00:02:03.272 LIB libspdk_event_sock.a 00:02:03.272 LIB libspdk_event_vfu_tgt.a 00:02:03.272 SO libspdk_event_vhost_blk.so.3.0 00:02:03.272 SO libspdk_event_keyring.so.1.0 00:02:03.272 SO libspdk_event_vmd.so.6.0 00:02:03.272 SO libspdk_event_scheduler.so.4.0 00:02:03.272 SO libspdk_event_iobuf.so.3.0 00:02:03.272 SO libspdk_event_sock.so.5.0 00:02:03.272 SO libspdk_event_vfu_tgt.so.3.0 00:02:03.272 SYMLINK libspdk_event_vhost_blk.so 00:02:03.272 SYMLINK libspdk_event_keyring.so 00:02:03.272 SYMLINK libspdk_event_scheduler.so 00:02:03.272 SYMLINK libspdk_event_vmd.so 00:02:03.272 SYMLINK libspdk_event_iobuf.so 00:02:03.272 SYMLINK libspdk_event_sock.so 00:02:03.272 SYMLINK libspdk_event_vfu_tgt.so 00:02:03.539 CC module/event/subsystems/accel/accel.o 00:02:03.798 LIB libspdk_event_accel.a 00:02:03.798 SO libspdk_event_accel.so.6.0 00:02:03.798 SYMLINK libspdk_event_accel.so 00:02:04.056 CC module/event/subsystems/bdev/bdev.o 00:02:04.314 LIB libspdk_event_bdev.a 00:02:04.314 SO libspdk_event_bdev.so.6.0 00:02:04.314 SYMLINK libspdk_event_bdev.so 00:02:04.572 CC module/event/subsystems/ublk/ublk.o 00:02:04.572 CC module/event/subsystems/nbd/nbd.o 00:02:04.572 CC module/event/subsystems/scsi/scsi.o 00:02:04.572 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:04.572 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.829 LIB libspdk_event_nbd.a 00:02:04.829 LIB 
libspdk_event_ublk.a 00:02:04.829 LIB libspdk_event_scsi.a 00:02:04.829 SO libspdk_event_nbd.so.6.0 00:02:04.829 SO libspdk_event_ublk.so.3.0 00:02:04.829 SO libspdk_event_scsi.so.6.0 00:02:04.829 LIB libspdk_event_nvmf.a 00:02:04.829 SYMLINK libspdk_event_nbd.so 00:02:04.829 SYMLINK libspdk_event_ublk.so 00:02:04.829 SO libspdk_event_nvmf.so.6.0 00:02:04.829 SYMLINK libspdk_event_scsi.so 00:02:05.087 SYMLINK libspdk_event_nvmf.so 00:02:05.345 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:05.345 CC module/event/subsystems/iscsi/iscsi.o 00:02:05.345 LIB libspdk_event_vhost_scsi.a 00:02:05.345 LIB libspdk_event_iscsi.a 00:02:05.345 SO libspdk_event_vhost_scsi.so.3.0 00:02:05.345 SO libspdk_event_iscsi.so.6.0 00:02:05.345 SYMLINK libspdk_event_vhost_scsi.so 00:02:05.603 SYMLINK libspdk_event_iscsi.so 00:02:05.604 SO libspdk.so.6.0 00:02:05.604 SYMLINK libspdk.so 00:02:05.861 CC test/rpc_client/rpc_client_test.o 00:02:06.128 CC app/trace_record/trace_record.o 00:02:06.128 CXX app/trace/trace.o 00:02:06.128 CC app/spdk_top/spdk_top.o 00:02:06.128 TEST_HEADER include/spdk/accel.h 00:02:06.128 TEST_HEADER include/spdk/assert.h 00:02:06.128 TEST_HEADER include/spdk/accel_module.h 00:02:06.128 TEST_HEADER include/spdk/base64.h 00:02:06.128 TEST_HEADER include/spdk/barrier.h 00:02:06.128 TEST_HEADER include/spdk/bdev.h 00:02:06.128 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.128 TEST_HEADER include/spdk/bdev_module.h 00:02:06.128 TEST_HEADER include/spdk/bit_array.h 00:02:06.128 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.128 CC app/spdk_nvme_perf/perf.o 00:02:06.128 TEST_HEADER include/spdk/bit_pool.h 00:02:06.128 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.128 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.128 TEST_HEADER include/spdk/blobfs.h 00:02:06.128 CC app/spdk_nvme_identify/identify.o 00:02:06.128 TEST_HEADER include/spdk/blob.h 00:02:06.128 TEST_HEADER include/spdk/conf.h 00:02:06.128 TEST_HEADER include/spdk/crc16.h 00:02:06.128 TEST_HEADER include/spdk/config.h 00:02:06.128 TEST_HEADER include/spdk/cpuset.h 00:02:06.128 CC app/spdk_lspci/spdk_lspci.o 00:02:06.128 TEST_HEADER include/spdk/crc32.h 00:02:06.128 TEST_HEADER include/spdk/crc64.h 00:02:06.128 TEST_HEADER include/spdk/dma.h 00:02:06.128 TEST_HEADER include/spdk/dif.h 00:02:06.128 TEST_HEADER include/spdk/endian.h 00:02:06.128 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.128 TEST_HEADER include/spdk/env.h 00:02:06.128 TEST_HEADER include/spdk/event.h 00:02:06.128 TEST_HEADER include/spdk/fd_group.h 00:02:06.128 TEST_HEADER include/spdk/fd.h 00:02:06.128 TEST_HEADER include/spdk/file.h 00:02:06.128 TEST_HEADER include/spdk/ftl.h 00:02:06.128 TEST_HEADER include/spdk/hexlify.h 00:02:06.128 TEST_HEADER include/spdk/histogram_data.h 00:02:06.128 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.128 TEST_HEADER include/spdk/idxd.h 00:02:06.128 TEST_HEADER include/spdk/init.h 00:02:06.128 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.128 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.128 TEST_HEADER include/spdk/ioat.h 00:02:06.128 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.128 TEST_HEADER include/spdk/json.h 00:02:06.128 TEST_HEADER include/spdk/keyring.h 00:02:06.128 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.128 TEST_HEADER include/spdk/keyring_module.h 00:02:06.128 TEST_HEADER include/spdk/likely.h 00:02:06.128 CC app/spdk_dd/spdk_dd.o 00:02:06.128 TEST_HEADER include/spdk/lvol.h 00:02:06.128 TEST_HEADER include/spdk/log.h 00:02:06.128 TEST_HEADER include/spdk/memory.h 00:02:06.128 TEST_HEADER 
include/spdk/mmio.h 00:02:06.128 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.128 TEST_HEADER include/spdk/nbd.h 00:02:06.128 TEST_HEADER include/spdk/notify.h 00:02:06.128 TEST_HEADER include/spdk/nvme.h 00:02:06.128 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.128 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.128 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.128 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.128 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.128 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.128 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.128 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.128 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.128 TEST_HEADER include/spdk/nvmf.h 00:02:06.128 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.128 TEST_HEADER include/spdk/opal.h 00:02:06.128 TEST_HEADER include/spdk/opal_spec.h 00:02:06.128 TEST_HEADER include/spdk/pipe.h 00:02:06.128 TEST_HEADER include/spdk/queue.h 00:02:06.128 TEST_HEADER include/spdk/pci_ids.h 00:02:06.128 CC app/nvmf_tgt/nvmf_main.o 00:02:06.128 TEST_HEADER include/spdk/rpc.h 00:02:06.128 TEST_HEADER include/spdk/scheduler.h 00:02:06.128 TEST_HEADER include/spdk/reduce.h 00:02:06.128 TEST_HEADER include/spdk/scsi.h 00:02:06.128 TEST_HEADER include/spdk/scsi_spec.h 00:02:06.128 TEST_HEADER include/spdk/string.h 00:02:06.128 TEST_HEADER include/spdk/sock.h 00:02:06.128 TEST_HEADER include/spdk/stdinc.h 00:02:06.128 TEST_HEADER include/spdk/thread.h 00:02:06.128 TEST_HEADER include/spdk/trace.h 00:02:06.128 TEST_HEADER include/spdk/trace_parser.h 00:02:06.128 TEST_HEADER include/spdk/ublk.h 00:02:06.128 TEST_HEADER include/spdk/tree.h 00:02:06.128 TEST_HEADER include/spdk/version.h 00:02:06.128 TEST_HEADER include/spdk/uuid.h 00:02:06.128 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.128 TEST_HEADER include/spdk/util.h 00:02:06.129 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.129 TEST_HEADER include/spdk/vmd.h 00:02:06.129 TEST_HEADER include/spdk/vhost.h 00:02:06.129 TEST_HEADER include/spdk/zipf.h 00:02:06.129 TEST_HEADER include/spdk/xor.h 00:02:06.129 CXX test/cpp_headers/accel_module.o 00:02:06.129 CXX test/cpp_headers/accel.o 00:02:06.129 CXX test/cpp_headers/assert.o 00:02:06.129 CC app/spdk_tgt/spdk_tgt.o 00:02:06.129 CXX test/cpp_headers/barrier.o 00:02:06.129 CXX test/cpp_headers/bdev.o 00:02:06.129 CXX test/cpp_headers/base64.o 00:02:06.129 CXX test/cpp_headers/bdev_module.o 00:02:06.129 CXX test/cpp_headers/bit_array.o 00:02:06.129 CXX test/cpp_headers/bit_pool.o 00:02:06.129 CXX test/cpp_headers/bdev_zone.o 00:02:06.129 CXX test/cpp_headers/blob_bdev.o 00:02:06.129 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.129 CXX test/cpp_headers/blob.o 00:02:06.129 CXX test/cpp_headers/blobfs.o 00:02:06.129 CXX test/cpp_headers/conf.o 00:02:06.129 CXX test/cpp_headers/config.o 00:02:06.129 CXX test/cpp_headers/crc16.o 00:02:06.129 CXX test/cpp_headers/cpuset.o 00:02:06.129 CXX test/cpp_headers/crc32.o 00:02:06.129 CXX test/cpp_headers/dma.o 00:02:06.129 CXX test/cpp_headers/crc64.o 00:02:06.129 CXX test/cpp_headers/dif.o 00:02:06.129 CXX test/cpp_headers/endian.o 00:02:06.129 CXX test/cpp_headers/env_dpdk.o 00:02:06.129 CXX test/cpp_headers/env.o 00:02:06.129 CXX test/cpp_headers/event.o 00:02:06.129 CXX test/cpp_headers/fd_group.o 00:02:06.129 CXX test/cpp_headers/fd.o 00:02:06.129 CXX test/cpp_headers/file.o 00:02:06.129 CXX test/cpp_headers/ftl.o 00:02:06.129 CXX test/cpp_headers/gpt_spec.o 00:02:06.129 CXX test/cpp_headers/hexlify.o 00:02:06.129 CXX test/cpp_headers/histogram_data.o 
00:02:06.129 CXX test/cpp_headers/idxd.o 00:02:06.129 CXX test/cpp_headers/idxd_spec.o 00:02:06.129 CXX test/cpp_headers/ioat.o 00:02:06.129 CXX test/cpp_headers/init.o 00:02:06.129 CXX test/cpp_headers/ioat_spec.o 00:02:06.129 CXX test/cpp_headers/json.o 00:02:06.129 CXX test/cpp_headers/iscsi_spec.o 00:02:06.129 CXX test/cpp_headers/keyring.o 00:02:06.129 CXX test/cpp_headers/jsonrpc.o 00:02:06.129 CXX test/cpp_headers/keyring_module.o 00:02:06.129 CXX test/cpp_headers/likely.o 00:02:06.129 CXX test/cpp_headers/log.o 00:02:06.129 CXX test/cpp_headers/lvol.o 00:02:06.129 CXX test/cpp_headers/memory.o 00:02:06.129 CXX test/cpp_headers/mmio.o 00:02:06.129 CXX test/cpp_headers/nbd.o 00:02:06.129 CXX test/cpp_headers/nvme.o 00:02:06.129 CXX test/cpp_headers/notify.o 00:02:06.129 CXX test/cpp_headers/nvme_ocssd.o 00:02:06.129 CXX test/cpp_headers/nvme_intel.o 00:02:06.129 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:06.129 CXX test/cpp_headers/nvmf_cmd.o 00:02:06.129 CXX test/cpp_headers/nvme_spec.o 00:02:06.129 CXX test/cpp_headers/nvme_zns.o 00:02:06.129 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:06.129 CXX test/cpp_headers/nvmf_spec.o 00:02:06.129 CXX test/cpp_headers/nvmf.o 00:02:06.129 CXX test/cpp_headers/nvmf_transport.o 00:02:06.129 CXX test/cpp_headers/opal.o 00:02:06.129 CXX test/cpp_headers/opal_spec.o 00:02:06.129 CXX test/cpp_headers/pci_ids.o 00:02:06.129 CC examples/ioat/verify/verify.o 00:02:06.129 CXX test/cpp_headers/pipe.o 00:02:06.129 CXX test/cpp_headers/queue.o 00:02:06.129 CXX test/cpp_headers/reduce.o 00:02:06.129 CC examples/ioat/perf/perf.o 00:02:06.129 CC test/thread/poller_perf/poller_perf.o 00:02:06.129 CC examples/util/zipf/zipf.o 00:02:06.129 CXX test/cpp_headers/rpc.o 00:02:06.129 CC test/app/jsoncat/jsoncat.o 00:02:06.129 CC test/env/memory/memory_ut.o 00:02:06.129 CC app/fio/bdev/fio_plugin.o 00:02:06.393 CC app/fio/nvme/fio_plugin.o 00:02:06.393 CC test/env/pci/pci_ut.o 00:02:06.393 CC test/app/stub/stub.o 00:02:06.393 CC test/dma/test_dma/test_dma.o 00:02:06.393 CC test/env/vtophys/vtophys.o 00:02:06.393 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:06.393 CC test/app/histogram_perf/histogram_perf.o 00:02:06.393 CXX test/cpp_headers/scheduler.o 00:02:06.393 CC test/app/bdev_svc/bdev_svc.o 00:02:06.393 LINK spdk_lspci 00:02:06.656 LINK rpc_client_test 00:02:06.656 LINK interrupt_tgt 00:02:06.656 LINK spdk_nvme_discover 00:02:06.656 LINK jsoncat 00:02:06.656 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:06.656 LINK spdk_tgt 00:02:06.656 LINK zipf 00:02:06.656 CC test/env/mem_callbacks/mem_callbacks.o 00:02:06.656 LINK nvmf_tgt 00:02:06.656 LINK poller_perf 00:02:06.656 CXX test/cpp_headers/scsi.o 00:02:06.656 CXX test/cpp_headers/scsi_spec.o 00:02:06.656 CXX test/cpp_headers/sock.o 00:02:06.656 CXX test/cpp_headers/stdinc.o 00:02:06.656 LINK spdk_trace_record 00:02:06.656 CXX test/cpp_headers/string.o 00:02:06.656 CXX test/cpp_headers/thread.o 00:02:06.656 CXX test/cpp_headers/trace.o 00:02:06.657 CXX test/cpp_headers/trace_parser.o 00:02:06.657 CXX test/cpp_headers/tree.o 00:02:06.657 CXX test/cpp_headers/ublk.o 00:02:06.657 CXX test/cpp_headers/util.o 00:02:06.657 CXX test/cpp_headers/uuid.o 00:02:06.657 CXX test/cpp_headers/version.o 00:02:06.657 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.657 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.657 CXX test/cpp_headers/vhost.o 00:02:06.657 CXX test/cpp_headers/vmd.o 00:02:06.657 CXX test/cpp_headers/xor.o 00:02:06.657 CXX test/cpp_headers/zipf.o 00:02:06.914 LINK spdk_dd 00:02:06.914 LINK iscsi_tgt 
00:02:06.914 LINK vtophys 00:02:06.914 LINK histogram_perf 00:02:06.914 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:06.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:06.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:06.914 LINK env_dpdk_post_init 00:02:06.914 LINK stub 00:02:06.914 LINK verify 00:02:06.914 LINK ioat_perf 00:02:06.914 LINK bdev_svc 00:02:06.914 LINK pci_ut 00:02:07.173 LINK spdk_trace 00:02:07.173 LINK test_dma 00:02:07.173 LINK spdk_bdev 00:02:07.173 LINK nvme_fuzz 00:02:07.173 LINK spdk_nvme_identify 00:02:07.173 CC test/event/reactor/reactor.o 00:02:07.173 CC test/event/event_perf/event_perf.o 00:02:07.173 CC test/event/reactor_perf/reactor_perf.o 00:02:07.173 CC examples/sock/hello_world/hello_sock.o 00:02:07.173 CC examples/vmd/lsvmd/lsvmd.o 00:02:07.173 CC examples/idxd/perf/perf.o 00:02:07.173 CC examples/vmd/led/led.o 00:02:07.173 CC test/event/app_repeat/app_repeat.o 00:02:07.173 CC test/event/scheduler/scheduler.o 00:02:07.173 CC examples/thread/thread/thread_ex.o 00:02:07.173 LINK spdk_nvme_perf 00:02:07.173 LINK vhost_fuzz 00:02:07.431 LINK spdk_nvme 00:02:07.431 LINK mem_callbacks 00:02:07.431 LINK spdk_top 00:02:07.431 LINK led 00:02:07.431 LINK reactor 00:02:07.431 LINK event_perf 00:02:07.431 LINK lsvmd 00:02:07.431 LINK reactor_perf 00:02:07.431 CC app/vhost/vhost.o 00:02:07.431 LINK app_repeat 00:02:07.431 LINK hello_sock 00:02:07.431 LINK scheduler 00:02:07.431 LINK idxd_perf 00:02:07.689 LINK thread 00:02:07.689 CC test/nvme/overhead/overhead.o 00:02:07.689 CC test/nvme/e2edp/nvme_dp.o 00:02:07.689 CC test/nvme/simple_copy/simple_copy.o 00:02:07.689 CC test/nvme/connect_stress/connect_stress.o 00:02:07.689 CC test/nvme/err_injection/err_injection.o 00:02:07.689 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.689 CC test/blobfs/mkfs/mkfs.o 00:02:07.689 CC test/nvme/boot_partition/boot_partition.o 00:02:07.689 CC test/nvme/startup/startup.o 00:02:07.689 CC test/nvme/reset/reset.o 00:02:07.689 CC test/nvme/compliance/nvme_compliance.o 00:02:07.689 CC test/nvme/fdp/fdp.o 00:02:07.689 CC test/nvme/sgl/sgl.o 00:02:07.689 CC test/nvme/cuse/cuse.o 00:02:07.689 CC test/nvme/aer/aer.o 00:02:07.689 CC test/nvme/reserve/reserve.o 00:02:07.689 CC test/accel/dif/dif.o 00:02:07.689 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.689 LINK memory_ut 00:02:07.689 LINK vhost 00:02:07.689 CC test/lvol/esnap/esnap.o 00:02:07.689 LINK err_injection 00:02:07.689 LINK connect_stress 00:02:07.689 LINK boot_partition 00:02:07.689 LINK startup 00:02:07.689 LINK fused_ordering 00:02:07.689 LINK doorbell_aers 00:02:07.689 LINK simple_copy 00:02:07.689 LINK mkfs 00:02:07.947 LINK reserve 00:02:07.947 LINK nvme_dp 00:02:07.947 LINK reset 00:02:07.947 LINK overhead 00:02:07.947 LINK sgl 00:02:07.947 LINK aer 00:02:07.947 LINK nvme_compliance 00:02:07.947 LINK fdp 00:02:07.947 CC examples/nvme/hotplug/hotplug.o 00:02:07.947 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.947 CC examples/nvme/reconnect/reconnect.o 00:02:07.947 CC examples/nvme/abort/abort.o 00:02:07.947 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.947 CC examples/nvme/hello_world/hello_world.o 00:02:07.947 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.947 CC examples/nvme/arbitration/arbitration.o 00:02:07.947 LINK dif 00:02:07.947 CC examples/accel/perf/accel_perf.o 00:02:07.947 CC examples/blob/hello_world/hello_blob.o 00:02:07.947 CC examples/blob/cli/blobcli.o 00:02:08.205 LINK pmr_persistence 00:02:08.205 LINK cmb_copy 00:02:08.205 LINK hotplug 00:02:08.205 LINK 
hello_world 00:02:08.205 LINK arbitration 00:02:08.205 LINK abort 00:02:08.205 LINK reconnect 00:02:08.205 LINK iscsi_fuzz 00:02:08.205 LINK hello_blob 00:02:08.463 LINK nvme_manage 00:02:08.463 LINK accel_perf 00:02:08.463 LINK blobcli 00:02:08.463 CC test/bdev/bdevio/bdevio.o 00:02:08.721 LINK cuse 00:02:08.722 LINK bdevio 00:02:08.980 CC examples/bdev/bdevperf/bdevperf.o 00:02:08.980 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.980 LINK hello_bdev 00:02:09.547 LINK bdevperf 00:02:09.805 CC examples/nvmf/nvmf/nvmf.o 00:02:10.371 LINK nvmf 00:02:10.938 LINK esnap 00:02:11.197 00:02:11.197 real 0m45.203s 00:02:11.197 user 6m45.894s 00:02:11.197 sys 3m31.131s 00:02:11.197 18:16:56 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.197 18:16:56 make -- common/autotest_common.sh@10 -- $ set +x 00:02:11.197 ************************************ 00:02:11.197 END TEST make 00:02:11.197 ************************************ 00:02:11.455 18:16:56 -- common/autotest_common.sh@1142 -- $ return 0 00:02:11.455 18:16:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:11.455 18:16:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:11.455 18:16:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:11.455 18:16:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.455 18:16:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:11.455 18:16:56 -- pm/common@44 -- $ pid=3624751 00:02:11.455 18:16:56 -- pm/common@50 -- $ kill -TERM 3624751 00:02:11.455 18:16:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.455 18:16:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:11.455 18:16:56 -- pm/common@44 -- $ pid=3624753 00:02:11.455 18:16:56 -- pm/common@50 -- $ kill -TERM 3624753 00:02:11.455 18:16:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.455 18:16:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:11.455 18:16:56 -- pm/common@44 -- $ pid=3624755 00:02:11.455 18:16:56 -- pm/common@50 -- $ kill -TERM 3624755 00:02:11.455 18:16:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.455 18:16:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:11.455 18:16:56 -- pm/common@44 -- $ pid=3624778 00:02:11.455 18:16:56 -- pm/common@50 -- $ sudo -E kill -TERM 3624778 00:02:11.455 18:16:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.455 18:16:56 -- nvmf/common.sh@7 -- # uname -s 00:02:11.455 18:16:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.455 18:16:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.455 18:16:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.455 18:16:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.455 18:16:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.455 18:16:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.456 18:16:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.456 18:16:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.456 18:16:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.456 18:16:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.456 18:16:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:11.456 18:16:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:11.456 18:16:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.456 18:16:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.456 18:16:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.456 18:16:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:11.456 18:16:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.456 18:16:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.456 18:16:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.456 18:16:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.456 18:16:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.456 18:16:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.456 18:16:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.456 18:16:56 -- paths/export.sh@5 -- # export PATH 00:02:11.456 18:16:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.456 18:16:56 -- nvmf/common.sh@47 -- # : 0 00:02:11.456 18:16:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:11.456 18:16:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:11.456 18:16:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:11.456 18:16:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.456 18:16:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.456 18:16:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:11.456 18:16:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:11.456 18:16:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:11.456 18:16:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.456 18:16:56 -- spdk/autotest.sh@32 -- # uname -s 00:02:11.456 18:16:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.456 18:16:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.456 18:16:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.456 18:16:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.456 18:16:56 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.456 18:16:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.456 18:16:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:11.456 18:16:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:11.456 18:16:56 -- spdk/autotest.sh@48 -- # udevadm_pid=3683982 00:02:11.456 18:16:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:11.456 18:16:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:11.456 18:16:56 -- pm/common@17 -- # local monitor 00:02:11.456 18:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.456 18:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.456 18:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.456 18:16:56 -- pm/common@21 -- # date +%s 00:02:11.456 18:16:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.456 18:16:56 -- pm/common@21 -- # date +%s 00:02:11.456 18:16:56 -- pm/common@25 -- # sleep 1 00:02:11.456 18:16:56 -- pm/common@21 -- # date +%s 00:02:11.456 18:16:56 -- pm/common@21 -- # date +%s 00:02:11.456 18:16:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060216 00:02:11.456 18:16:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060216 00:02:11.456 18:16:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060216 00:02:11.456 18:16:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060216 00:02:11.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060216_collect-vmstat.pm.log 00:02:11.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060216_collect-cpu-load.pm.log 00:02:11.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060216_collect-cpu-temp.pm.log 00:02:11.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060216_collect-bmc-pm.bmc.pm.log 00:02:12.391 18:16:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:12.391 18:16:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:12.391 18:16:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:12.391 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:02:12.650 18:16:57 -- spdk/autotest.sh@59 -- # create_test_list 00:02:12.650 18:16:57 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:12.650 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:02:12.650 18:16:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:12.650 18:16:57 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.650 18:16:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
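The four collectors launched above follow a simple pidfile pattern: each collect-* script is started in the background with its PID recorded under the output power directory, and the signal_monitor_resources TERM pass that ran at the end of the make test reads those pidfiles back and signals each process. A minimal sketch of the same pattern, with vmstat standing in for the real collect-* scripts and /tmp/power as a stand-in for the job's output directory (both are assumptions for illustration):

    # launch a resource monitor in the background and record its PID in a pidfile
    POWER_DIR=/tmp/power          # the job itself uses .../spdk/../output/power
    mkdir -p "$POWER_DIR"
    vmstat 1 > "$POWER_DIR/collect-vmstat.log" 2>&1 &
    echo $! > "$POWER_DIR/collect-vmstat.pid"

    # later, mirroring signal_monitor_resources TERM: stop every recorded monitor
    for pidfile in "$POWER_DIR"/*.pid; do
        [[ -e $pidfile ]] || continue          # same guard as the [[ -e ...pid ]] checks above
        kill -TERM "$(cat "$pidfile")" 2> /dev/null || true
        rm -f "$pidfile"
    done

The existence check before each kill mirrors the pidfile guards visible in the stop sequence earlier in this log, so the cleanup pass tolerates monitors that have already exited.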
00:02:12.650 18:16:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:12.650 18:16:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.650 18:16:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:12.650 18:16:58 -- common/autotest_common.sh@1455 -- # uname 00:02:12.650 18:16:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:12.650 18:16:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:12.650 18:16:58 -- common/autotest_common.sh@1475 -- # uname 00:02:12.650 18:16:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:12.650 18:16:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:12.650 18:16:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:12.650 18:16:58 -- spdk/autotest.sh@72 -- # hash lcov 00:02:12.650 18:16:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:12.650 18:16:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:12.650 --rc lcov_branch_coverage=1 00:02:12.650 --rc lcov_function_coverage=1 00:02:12.650 --rc genhtml_branch_coverage=1 00:02:12.650 --rc genhtml_function_coverage=1 00:02:12.650 --rc genhtml_legend=1 00:02:12.650 --rc geninfo_all_blocks=1 00:02:12.650 ' 00:02:12.650 18:16:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:12.650 --rc lcov_branch_coverage=1 00:02:12.650 --rc lcov_function_coverage=1 00:02:12.650 --rc genhtml_branch_coverage=1 00:02:12.650 --rc genhtml_function_coverage=1 00:02:12.650 --rc genhtml_legend=1 00:02:12.650 --rc geninfo_all_blocks=1 00:02:12.650 ' 00:02:12.650 18:16:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:12.650 --rc lcov_branch_coverage=1 00:02:12.650 --rc lcov_function_coverage=1 00:02:12.650 --rc genhtml_branch_coverage=1 00:02:12.650 --rc genhtml_function_coverage=1 00:02:12.650 --rc genhtml_legend=1 00:02:12.650 --rc geninfo_all_blocks=1 00:02:12.650 --no-external' 00:02:12.650 18:16:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:12.650 --rc lcov_branch_coverage=1 00:02:12.650 --rc lcov_function_coverage=1 00:02:12.650 --rc genhtml_branch_coverage=1 00:02:12.650 --rc genhtml_function_coverage=1 00:02:12.650 --rc genhtml_legend=1 00:02:12.650 --rc geninfo_all_blocks=1 00:02:12.650 --no-external' 00:02:12.650 18:16:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:12.650 lcov: LCOV version 1.14 00:02:12.650 18:16:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:14.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:14.057 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:14.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:14.057 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:14.057 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:14.057 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno (this two-line warning repeats for every remaining test/cpp_headers/*.gcno file in the baseline capture; the headers contain no functions, so geninfo has nothing to record for them)
00:02:26.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:26.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:39.010 18:17:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:39.010 18:17:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:39.010 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:02:39.010 18:17:22 -- spdk/autotest.sh@91 -- # rm -f 00:02:39.010 18:17:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:39.578 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:39.578 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:39.578 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:39.837 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:40.096 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:40.096 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:40.096 18:17:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 18:17:25 -- common/autotest_common.sh@1669 -- # zoned_devs=() 18:17:25 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 18:17:25 -- common/autotest_common.sh@1670 -- # local nvme bdf 18:17:25 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 18:17:25 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 18:17:25 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 18:17:25 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 18:17:25 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 18:17:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 18:17:25 -- spdk/autotest.sh@110 -- # for dev in
/dev/nvme*n!(*p*) 00:02:40.096 18:17:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:40.096 18:17:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:40.096 18:17:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:40.096 18:17:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:40.096 No valid GPT data, bailing 00:02:40.096 18:17:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:40.096 18:17:25 -- scripts/common.sh@391 -- # pt= 00:02:40.096 18:17:25 -- scripts/common.sh@392 -- # return 1 00:02:40.096 18:17:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:40.096 1+0 records in 00:02:40.096 1+0 records out 00:02:40.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190366 s, 551 MB/s 00:02:40.096 18:17:25 -- spdk/autotest.sh@118 -- # sync 00:02:40.096 18:17:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:40.096 18:17:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:40.096 18:17:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:45.377 18:17:30 -- spdk/autotest.sh@124 -- # uname -s 00:02:45.377 18:17:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:45.377 18:17:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:45.377 18:17:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.377 18:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.377 18:17:30 -- common/autotest_common.sh@10 -- # set +x 00:02:45.377 ************************************ 00:02:45.377 START TEST setup.sh 00:02:45.377 ************************************ 00:02:45.377 18:17:30 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:45.377 * Looking for test storage... 00:02:45.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.377 18:17:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:45.377 18:17:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:45.377 18:17:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:45.377 18:17:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.377 18:17:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.377 18:17:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:45.377 ************************************ 00:02:45.377 START TEST acl 00:02:45.377 ************************************ 00:02:45.377 18:17:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:45.635 * Looking for test storage... 
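The sweep traced just above is autotest's pre-clean pass over whole NVMe namespaces: the extglob pattern /dev/nvme*n!(*p*) matches namespaces but not partitions, block_in_use asks spdk-gpt.py and blkid whether the device carries a partition table, and only when nothing is found ("No valid GPT data, bailing", empty PTTYPE, return 1) is the first megabyte zeroed with dd. A reduced sketch of the same check, using blkid alone in place of the spdk-gpt.py helper; it is destructive by design, so treat it as an illustration rather than a tool:

    #!/usr/bin/env bash
    shopt -s extglob                      # required for the !(*p*) pattern

    for dev in /dev/nvme*n!(*p*); do
        [[ -b $dev ]] || continue
        # blkid prints the partition-table type (gpt, dos, ...) when one exists
        pt=$(blkid -s PTTYPE -o value "$dev")
        if [[ -z $pt ]]; then
            # no partition table detected: scrub the first MB so stale
            # metadata cannot confuse the tests that run next
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

The get_zoned_devs pass that ran just before this loop collects zoned namespaces so they can be excluded from this kind of rewrite; on this machine none were found, so nvme0n1 went straight through the check and the 1 MiB wipe.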
00:02:45.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.635 18:17:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:45.635 18:17:30 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:45.635 18:17:30 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.635 18:17:30 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.921 18:17:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:48.921 18:17:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:48.921 18:17:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.921 18:17:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:48.921 18:17:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.921 18:17:34 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:51.456 Hugepages 00:02:51.456 node hugesize free / total 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.456 00:02:51.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.456 18:17:36 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 18:17:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 18:17:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ (the same [[ BDF == *:*:*.* ]] / [[ ioatdma == nvme ]] / continue triple repeats for each remaining ioatdma channel on 0000:00 and 0000:80)
18:17:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 18:17:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 18:17:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 18:17:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 18:17:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 18:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
18:17:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 18:17:37 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 18:17:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 18:17:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 18:17:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x ************************************ START TEST denied ************************************ 18:17:37 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 18:17:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 18:17:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 18:17:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 18:17:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 18:17:37 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:55.002 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:55.002 18:17:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 18:17:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 18:17:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 18:17:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 18:17:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 18:17:40 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:55.002 18:17:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:55.002 18:17:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:55.002 18:17:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.002 18:17:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.195 00:02:59.195 real 0m7.159s 00:02:59.195 user 0m2.323s 00:02:59.195 sys 0m4.108s 00:02:59.195 18:17:44 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.195 18:17:44 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:59.195 ************************************ 00:02:59.195 END TEST denied 00:02:59.195 ************************************ 00:02:59.195 18:17:44 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:59.195 18:17:44 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:59.195 18:17:44 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.195 18:17:44 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.195 18:17:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.195 ************************************ 00:02:59.195 START TEST allowed 00:02:59.195 ************************************ 00:02:59.195 18:17:44 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:59.195 18:17:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:59.195 18:17:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:59.195 18:17:44 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:59.195 18:17:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.195 18:17:44 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.386 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:03.386 18:17:48 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:03.386 18:17:48 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:03.386 18:17:48 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:03.386 18:17:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.386 18:17:48 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.694 00:03:06.694 real 0m7.562s 00:03:06.694 user 0m2.297s 00:03:06.694 sys 0m3.923s 00:03:06.694 18:17:51 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.694 18:17:51 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.694 ************************************ 00:03:06.694 END TEST allowed 00:03:06.694 ************************************ 00:03:06.694 18:17:51 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.694 00:03:06.694 real 0m21.001s 00:03:06.694 user 0m6.997s 00:03:06.694 sys 0m12.151s 00:03:06.694 18:17:51 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.694 18:17:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.694 ************************************ 00:03:06.694 END TEST acl 00:03:06.694 ************************************ 00:03:06.694 18:17:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.694 18:17:51 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.694 18:17:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.694 18:17:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.694 18:17:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.694 ************************************ 00:03:06.694 START TEST hugepages 00:03:06.694 ************************************ 00:03:06.694 18:17:51 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.694 * Looking for test storage... 00:03:06.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 174470408 kB' 'MemAvailable: 177271588 kB' 'Buffers: 4132 kB' 'Cached: 9112472 kB' 'SwapCached: 0 kB' 'Active: 6172108 kB' 'Inactive: 3450620 kB' 'Active(anon): 5785528 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509520 kB' 'Mapped: 198332 kB' 'Shmem: 5279404 kB' 'KReclaimable: 216864 kB' 'Slab: 731664 kB' 'SReclaimable: 216864 kB' 'SUnreclaim: 514800 kB' 'KernelStack: 20656 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 7286068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315284 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.694 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ (the identical [[ field == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue pair repeats for every following /proc/meminfo field until the Hugepagesize entry itself is reached)
setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.696 
18:17:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.696 18:17:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:06.696 18:17:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.696 18:17:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.696 18:17:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.696 ************************************ 00:03:06.696 START TEST default_setup 00:03:06.696 ************************************ 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.696 18:17:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.028 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.028 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.963 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.232 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176640788 kB' 'MemAvailable: 179441820 kB' 'Buffers: 4132 kB' 'Cached: 9112584 kB' 'SwapCached: 0 kB' 'Active: 6190332 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803752 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527556 kB' 'Mapped: 198208 kB' 'Shmem: 5279516 kB' 'KReclaimable: 216564 kB' 'Slab: 729576 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 513012 kB' 
'KernelStack: 20976 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7305816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315556 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
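The run of bracketed comparisons above and below is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time: the printf dumps the snapshot, mapfile loads it into an array, any per-node "Node N " prefix is stripped, and an IFS=': ' read splits each line into key and value until the requested key (here AnonHugePages) matches and its value is echoed. xtrace prints every loop iteration, which is why the log repeats IFS/read/test/continue hundreds of times. A minimal standalone sketch of the same pattern; the function name is illustrative and the body is simplified relative to the helper actually traced:

    # Simplified re-implementation of the get_meminfo pattern traced above;
    # the real helper is setup/common.sh@16-33 and differs in detail.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo

        # Per-node counters come from sysfs, where every line is
        # prefixed with "Node N ", e.g. "Node 0 HugePages_Free: 1024".
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node "$node" }")   # strip the per-node prefix, if any

        for line in "${mem[@]}"; do
            # IFS=': ' splits "Hugepagesize:    2048 kB" into
            # var=Hugepagesize, val=2048 (the unit lands in $_)
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # get_meminfo_sketch Hugepagesize     -> 2048 (the "echo 2048" traced at @33)
    # get_meminfo_sketch HugePages_Free 0 -> per-node value from sysfs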
00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:11.233 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176637876 kB' 'MemAvailable: 179438908 kB' 'Buffers: 4132 kB' 'Cached: 9112588 kB' 'SwapCached: 0 kB' 'Active: 6189476 kB' 'Inactive: 3450620 kB' 'Active(anon): 5802896 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526656 kB' 'Mapped: 198208 kB' 'Shmem: 5279520 kB' 'KReclaimable: 216564 kB' 'Slab: 729616 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 513052 kB' 'KernelStack: 20768 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7305836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315508 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
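A note on the heavy backslash escaping in these traces: a line like [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] is just bash xtrace rendering [[ $var == "$get" ]] after expansion; a quoted right-hand side is printed with every character escaped so it reads as a literal string match rather than a glob pattern. The "setup/common.sh@32 -- #" prefix in place of the stock "+ " is evidently a custom PS4 carrying the timestamp, function path, and source file@line. The same escaping with the default PS4, as a quick demo:

    $ ( set -x; key=Mlocked; [[ $key == "HugePages_Surp" ]] )
    + key=Mlocked
    + [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]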
00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.234 18:17:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.234 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [xtrace cycle IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeats for SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd; none match]
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
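The lookup traced above is the whole trick behind setup/common.sh's get_meminfo: walk the meminfo lines, split each "Key: value" pair on ':' plus whitespace, and emit the value once the requested key matches. A minimal standalone sketch of the same idiom (simplified for illustration; the SPDK helper buffers the file with mapfile first rather than streaming it):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo parsing idiom: IFS=': ' makes read split
    # "HugePages_Surp:   0" into var=HugePages_Surp, val=0, with any
    # trailing unit ("kB") falling into the throwaway _ variable.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 on this box, per the trace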
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176637760 kB' 'MemAvailable: 179438792 kB' 'Buffers: 4132 kB' 'Cached: 9112604 kB' 'SwapCached: 0 kB' 'Active: 6189988 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803408 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527168 kB' 'Mapped: 198208 kB' 'Shmem: 5279536 kB' 'KReclaimable: 216564 kB' 'Slab: 729664 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 513100 kB' 'KernelStack: 20880 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7304364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315572 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:11.235 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [xtrace walks every key from MemTotal through HugePages_Free in order against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; each one hits continue]
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.236 nr_hugepages=1024
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.236 resv_hugepages=0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.236 surplus_hugepages=0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.236 anon_hugepages=0
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
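The arithmetic guards traced at hugepages.sh@107 and @109 are the actual assertion of this test step: the page count the kernel reports must equal the pages the test configured, with surplus and reserved pages folded in. Roughly the same identity, re-derived with awk instead of the get_meminfo walk (a sketch mirroring the check as traced, not the SPDK code itself):

    #!/usr/bin/env bash
    # Re-derive the @107/@110-style identity from /proc/meminfo:
    # reported total == configured persistent pages + surplus + reserved.
    nr_hugepages=1024    # what this test step configured
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi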
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.236 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176637808 kB' 'MemAvailable: 179438840 kB' 'Buffers: 4132 kB' 'Cached: 9112628 kB' 'SwapCached: 0 kB' 'Active: 6189560 kB' 'Inactive: 3450620 kB' 'Active(anon): 5802980 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526712 kB' 'Mapped: 198208 kB' 'Shmem: 5279560 kB' 'KReclaimable: 216564 kB' 'Slab: 729632 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 513068 kB' 'KernelStack: 20816 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7305880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315556 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:11.237 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [xtrace walks every key from MemTotal through Unaccepted in order against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; each one hits continue]
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
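The snapshot above is also internally consistent on the hugepage side: with a single page size, Hugetlb should equal HugePages_Total times Hugepagesize, and indeed 1024 pages * 2048 kB = 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB'. A small cross-check sketch (assumption: only one hugepage size is configured, since Hugetlb sums across all sizes):

    #!/usr/bin/env bash
    # Verify Hugetlb == HugePages_Total * Hugepagesize from /proc/meminfo.
    read -r total size hugetlb < <(awk '
        $1 == "HugePages_Total:" {t = $2}
        $1 == "Hugepagesize:"    {s = $2}
        $1 == "Hugetlb:"         {h = $2}
        END {print t, s, h}' /proc/meminfo)
    (( total * size == hugetlb )) && echo "Hugetlb consistent: $((total * size)) kB"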
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92507284 kB' 'MemUsed: 5155400 kB' 'SwapCached: 0 kB' 'Active: 2000044 kB' 'Inactive: 90556 kB' 'Active(anon): 1784448 kB' 'Inactive(anon): 0 kB' 'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697372 kB' 'Mapped: 129952 kB' 'AnonPages: 396352 kB' 'Shmem: 1391220 kB' 'KernelStack: 12984 kB' 'PageTables: 6212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104276 kB' 'Slab: 334560 kB' 'SReclaimable: 104276 kB' 'SUnreclaim: 230284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
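Past the global checks, get_nodes discovers the NUMA topology with an extglob pattern over /sys/devices/system/node, and the node-scoped get_meminfo call (node argument 0) switches its input file to the node's meminfo, whose lines carry a "Node 0 " prefix that the "${mem[@]#Node +([0-9]) }" expansion strips before the usual key/value scan. A condensed sketch of that per-node path, assuming the /sys layout shown in the trace:

    #!/usr/bin/env bash
    # Per-node variant of the meminfo read traced above.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node${node}/meminfo ]] \
        && mem_f=/sys/devices/system/node/node${node}/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && { echo "node${node} HugePages_Surp=$val"; break; }
    done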
00:03:11.238 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [xtrace walks the node0 meminfo keys from MemTotal through Unaccepted in order against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; each one hits continue]
00:03:11.239 18:17:56 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # read -r var val _ 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.239 node0=1024 expecting 1024 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.239 00:03:11.239 real 0m4.589s 00:03:11.239 user 0m1.281s 00:03:11.239 sys 0m1.982s 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.239 18:17:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:11.239 ************************************ 00:03:11.239 END TEST default_setup 00:03:11.239 ************************************ 00:03:11.239 18:17:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:11.239 18:17:56 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:11.239 18:17:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.239 18:17:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.239 18:17:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.497 ************************************ 00:03:11.497 START TEST per_node_1G_alloc 00:03:11.497 ************************************ 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:11.497 18:17:56 
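The field walk condensed above is setup/common.sh's get_meminfo helper doing a plain IFS-split scan of /proc/meminfo. A minimal sketch of that idiom, reconstructed from the xtrace (the function name and the IFS=': ' / read -r var val _ / continue pattern are all visible in the trace; the real helper additionally snapshots the file with mapfile and strips per-node "Node <n> " prefixes at common.sh@28-29, and its exact argument handling is an assumption here):

    # Sketch only: scan a meminfo-style file for one field and print its value.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo when it exists.
        # (The real script also strips the "Node <n> " line prefix, per
        # common.sh@29; omitted in this sketch.)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # Each non-matching field is one "continue" line in the trace.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        echo 0
    }

Called as get_meminfo HugePages_Surp on this box it prints 0, which is the echo 0 / return 0 pair visible at the end of each condensed scan.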
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.497 18:17:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.030 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.030 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:14.030 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.030 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.295 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176632880 kB' 'MemAvailable: 179433912 kB' 'Buffers: 4132 kB' 'Cached: 9112716 kB' 'SwapCached: 0 kB' 'Active: 6189920 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803340 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526820 kB' 'Mapped: 198332 kB' 'Shmem: 5279648 kB' 'KReclaimable: 216564 kB' 'Slab: 728996 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 512432 kB' 'KernelStack: 20720 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7303588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315540 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ and a [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue pair for every /proc/meminfo field from MemTotal through HardwareCorrupted]
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
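The per_node_1G_alloc preamble traced above reduces to simple arithmetic: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size gives nr_hugepages=512, which get_test_nr_hugepages_per_node then pins on each requested node before scripts/setup.sh runs. A hedged sketch of that computation; the variable names mirror the trace, but the explicit division step is an assumption, since the trace only shows its result (512):

    size=1048576                                  # kB requested, i.e. 1 GiB
    default_hugepages=2048                        # kB, per 'Hugepagesize: 2048 kB'
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
    user_nodes=(0 1)
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages            # 512 pages pinned per node
    done
    # Handed to the setup script, which allocates 512 pages on each of the
    # two nodes, 1024 in total (the nr_hugepages=1024 seen at hugepages.sh@147):
    #   NRHUGE=512 HUGENODE=0,1 scripts/setup.sh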
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.297 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176633088 kB' 'MemAvailable: 179434120 kB' 'Buffers: 4132 kB' 'Cached: 9112716 kB' 'SwapCached: 0 kB' 'Active: 6189880 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803300 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526888 kB' 'Mapped: 198216 kB' 'Shmem: 5279648 kB' 'KReclaimable: 216564 kB' 'Slab: 728960 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 512396 kB' 'KernelStack: 20784 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7303404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315492 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ and [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pair for every /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.299 18:17:59
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176638948 kB' 'MemAvailable: 179439980 kB' 'Buffers: 4132 kB' 'Cached: 9112732 kB' 'SwapCached: 0 kB' 'Active: 6187956 kB' 'Inactive: 3450620 kB' 'Active(anon): 5801376 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524980 kB' 'Mapped: 197084 kB' 'Shmem: 5279664 kB' 'KReclaimable: 216564 kB' 'Slab: 728940 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 512376 kB' 'KernelStack: 20720 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7292884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315396 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.299 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... xtrace elided: every key from MemAvailable through HugePages_Free fails the HugePages_Rsvd comparison and hits 'continue' ...]
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
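The get_meminfo calls being traced here all run the same helper from setup/common.sh: load a meminfo file into an array, strip any per-node prefix, then scan key/value pairs until the requested key matches. A minimal sketch reconstructed from the xtrace above (the real function may differ in details not visible in this log):

  shopt -s extglob                          # needed for the +([0-9]) patterns seen in the trace
  get_meminfo() {                           # usage: get_meminfo <key> [node]
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo mem
      # with a node argument, read that node's meminfo instead of the global one
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix of per-node files
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # each mismatch is one 'continue' record above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

In this run it has just returned 0 for HugePages_Surp (surp=0) and 0 for HugePages_Rsvd (resv=0), consistent with the /proc/meminfo snapshot printed above.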
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:14.301 nr_hugepages=1024
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:14.301 resv_hugepages=0
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:14.301 surplus_hugepages=0
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:14.301 anon_hugepages=0
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176638548 kB' 'MemAvailable: 179439580 kB' 'Buffers: 4132 kB' 'Cached: 9112756 kB' 'SwapCached: 0 kB' 'Active: 6187068 kB' 'Inactive: 3450620 kB' 'Active(anon): 5800488 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524032 kB' 'Mapped: 197084 kB' 'Shmem: 5279688 kB' 'KReclaimable: 216564 kB' 'Slab: 728920 kB' 'SReclaimable: 216564 kB' 'SUnreclaim: 512356 kB' 'KernelStack: 20704 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7292908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315380 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
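The guards at setup/hugepages.sh lines 107 and 109 above are the actual assertion of this step (the same check recurs at line 110 once HugePages_Total is read back below): the pool the test configured (nr_hugepages=1024) must equal what the kernel reports once surplus and reserved pages are folded in. With the values just collected the arithmetic is trivial; a standalone illustration, not SPDK code:

  nr_hugepages=1024 surp=0 resv=0
  # HugePages_Total (1024) == nr_hugepages + surp + resv == 1024 + 0 + 0
  (( 1024 == nr_hugepages + surp + resv )) && echo 'global hugepage pool consistent'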
00:03:14.301 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... xtrace elided: every key from MemTotal through HugePages_Free fails the HugePages_Total comparison and hits 'continue' ...]
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:14.302 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93554904 kB' 'MemUsed: 4107780 kB' 'SwapCached: 0 kB' 'Active: 1998728 kB' 'Inactive: 90556 kB' 'Active(anon): 1783132 kB' 'Inactive(anon): 0 kB' 'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697380 kB' 'Mapped: 128472 kB' 'AnonPages: 395020 kB' 'Shmem: 1391228 kB' 'KernelStack: 12648 kB' 'PageTables: 5516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104276 kB' 'Slab: 334268 kB' 'SReclaimable: 104276 kB' 'SUnreclaim: 229992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
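get_nodes above has filled nodes_sys with 512 pages for each of the two NUMA nodes (no_nodes=2), i.e. the 1024-page pool is split evenly. The trace shows only the resulting assignments; one plausible reconstruction reads each node's 2 MiB pool size from sysfs (that path is the standard kernel interface, but whether the script actually derives 512 this way is not visible in this excerpt):

  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do  # extglob, as in the trace
          # assumed data source: the node's 2048 kB hugepage pool counter
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))  # fail the caller if no NUMA nodes were found
  }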
00:03:14.303 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... xtrace elided: node-0 meminfo keys from MemTotal through Unaccepted each fail the HugePages_Surp comparison and hit 'continue'; the trace resumes below ...]
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 83083852 kB' 'MemUsed: 10634644 kB' 'SwapCached: 0 kB' 'Active: 4188600 kB' 'Inactive: 3360064 kB' 'Active(anon): 4017616 kB' 'Inactive(anon): 0 kB' 'Active(file): 170984 kB' 'Inactive(file): 3360064 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7419556 kB' 'Mapped: 68620 kB' 'AnonPages: 129276 kB' 'Shmem: 3888508 kB' 'KernelStack: 7992 kB' 'PageTables: 2852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112288 kB' 'Slab: 394652 kB' 'SReclaimable: 112288 kB' 'SUnreclaim: 282364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:14.304 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop steps through the node1 fields from MemTotal through HugePages_Free, continuing on each, until HugePages_Surp matches]
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.305 node0=512 expecting 512
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:14.305 node1=512 expecting 512
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:14.305
00:03:14.305 real 0m3.053s
00:03:14.305 user 0m1.308s
00:03:14.305 sys 0m1.815s
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.305 18:17:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.305 ************************************
00:03:14.305 END TEST per_node_1G_alloc
00:03:14.305 ************************************
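The scan traced above comes from get_meminfo() in setup/common.sh: it loads /proc/meminfo (or a node's meminfo file) and walks it one 'field: value' pair at a time until the requested field matches, which is why every non-matching field leaves an IFS/read/continue triple in the xtrace. A minimal sketch of that pattern, reconstructed from the xtrace alone (the real helper may differ in detail):

    # Sketch of the key-scan seen in the trace: find one field in a meminfo file.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Use the per-node view when a node number is given and the file exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Each non-matching field shows up in the xtrace as a continue.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp 1 it prints the node1 surplus (0 in the run above); with no node argument the per-node path does not exist and it falls back to the system-wide /proc/meminfo.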
00:03:14.564 18:17:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:14.564 18:17:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:14.564 18:17:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.564 18:17:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.564 18:17:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.564 ************************************
00:03:14.564 START TEST even_2G_alloc
00:03:14.564 ************************************
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
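The get_test_nr_hugepages trace above fixes the test size: 2097152 kB requested at the 2048 kB default hugepage size (Hugepagesize in the snapshots below) gives nr_hugepages=1024, and get_test_nr_hugepages_per_node splits those pages evenly over the 2 NUMA nodes, 512 each, per the two nodes_test[_no_nodes - 1]=512 assignments. A small illustrative sketch of that arithmetic, assuming an even split is all the loop does (the real hugepages.sh also honors user-supplied node lists):

    # 2 GiB expressed in kB, divided by the 2 MiB default hugepage size.
    size=2097152
    default_hugepages=2048
    nr_hugepages=$((size / default_hugepages))   # -> 1024

    # Distribute the pages evenly across the nodes, last node first,
    # mirroring the (( _no_nodes > 0 )) countdown in the trace.
    _nr_hugepages=$nr_hugepages
    _no_nodes=2
    nodes_test=()
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
        ((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # trace shows ": 512" then ": 0"
        ((_no_nodes--))                                  # trace shows ": 1" then ": 0"
    done
    echo "${nodes_test[@]}"   # -> 512 512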
00:03:14.564 18:17:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.096 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.096 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.096 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176563216 kB' 'MemAvailable: 179364232 kB' 'Buffers: 4132 kB' 'Cached: 9112872 kB' 'SwapCached: 0 kB' 'Active: 6193820 kB' 'Inactive: 3450620 kB' 'Active(anon): 5807240 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529776 kB' 'Mapped: 198020 kB' 'Shmem: 5279804 kB' 'KReclaimable: 216532 kB' 'Slab: 728620 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512088 kB' 'KernelStack: 20736 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7302920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315528 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
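verify_nr_hugepages starts by ruling transparent hugepages in or out: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above matches the kernel's THP mode string (the bracketed word is the active mode), and get_meminfo AnonHugePages then reads how much anonymous memory is currently THP-backed. The same two checks outside the harness, as a hedged sketch:

    # The THP mode file reads like "always [madvise] never".
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP not fully disabled: anonymous mappings may be using hugepages,
        # which do not show up in the explicit HugePages_* counters.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in the run above
    fi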
00:03:17.360 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past every field from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.361 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176563276 kB' 'MemAvailable: 179364292 kB' 'Buffers: 4132 kB' 'Cached: 9112876 kB' 'SwapCached: 0 kB' 'Active: 6193120 kB' 'Inactive: 3450620 kB' 'Active(anon): 5806540 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530028 kB' 'Mapped: 198020 kB' 'Shmem: 5279808 kB' 'KReclaimable: 216532 kB' 'Slab: 728612 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512080 kB' 'KernelStack: 20736 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7302936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315496 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
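For reference, the HugePages_Surp value this second scan is after can be pulled straight from /proc/meminfo without the helper; a quick equivalent:

    # Show the hugepage counters the verification cares about.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # Or just the surplus count, matching what get_meminfo echoes:
    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo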
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop continues past MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters, the swap and zswap fields, AnonPages, Mapped, Shmem, the slab counters and KernelStack, still scanning for HugePages_Surp]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.362 18:18:02 
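The skip loop traced above is how setup/common.sh's get_meminfo resolves a single key: each meminfo line is split on ': ' into a key and a value, and every line is skipped with "continue" until the requested key (here HugePages_Surp) matches, at which point the value is echoed back. A minimal standalone sketch of the same pattern, simplified to read the file directly and using illustrative names rather than the actual SPDK helper:

    #!/usr/bin/env bash
    # Print the value column for one /proc/meminfo key, e.g. "HugePages_Surp".
    get_meminfo_value() {
      local want=$1 var val _
      # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp val=0
      while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip non-matching keys, as traced
        echo "$val"
        return 0
      done < /proc/meminfo
      return 1
    }

    get_meminfo_value HugePages_Surp   # prints 0 on the node traced above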
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.362 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176563276 kB' 'MemAvailable: 179364292 kB' 'Buffers: 4132 kB' 'Cached: 9112876 kB' 'SwapCached: 0 kB' 'Active: 6193152 kB' 'Inactive: 3450620 kB' 'Active(anon): 5806572 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530084 kB' 'Mapped: 198020 kB' 'Shmem: 5279808 kB' 'KReclaimable: 216532 kB' 'Slab: 728612 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512080 kB' 'KernelStack: 20752 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7302956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315496 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:17.363 [xtrace condensed: the common.sh@31/@32 read/skip loop repeats identically for every /proc/meminfo key (MemTotal through HugePages_Free) until the requested key is reached]
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:17.364 nr_hugepages=1024
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.364 resv_hugepages=0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.364 surplus_hugepages=0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.364 anon_hugepages=0
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
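The bookkeeping above is the reservation check: with get_meminfo returning 0 for both HugePages_Surp and HugePages_Rsvd, hugepages.sh records surp=0 and resv=0 and asserts that the kernel's hugepage totals reconcile with the requested nr_hugepages=1024 plus surplus and reserved pages. Roughly, that consistency check looks like the following sketch (the awk extraction and variable names are illustrative, not the script's own code):

    #!/usr/bin/env bash
    # Reconcile the requested hugepage count against the kernel's accounting.
    nr_hugepages=1024                                       # what the test requested
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    # Mirrors the traced (( 1024 == nr_hugepages + surp + resv )) assertion.
    if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
      echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
      exit 1
    fi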
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.364 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176564820 kB' 'MemAvailable: 179365836 kB' 'Buffers: 4132 kB' 'Cached: 9112876 kB' 'SwapCached: 0 kB' 'Active: 6193316 kB' 'Inactive: 3450620 kB' 'Active(anon): 5806736 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530240 kB' 'Mapped: 198020 kB' 'Shmem: 5279808 kB' 'KReclaimable: 216532 kB' 'Slab: 728612 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512080 kB' 'KernelStack: 20736 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7302980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315496 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:17.364 [xtrace condensed: the common.sh@31/@32 read/skip loop repeats identically for every /proc/meminfo key (MemTotal through Unaccepted) until the requested key is reached]
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93500804 kB' 'MemUsed: 4161880 kB' 'SwapCached: 0 kB' 'Active: 2005564 kB' 'Inactive: 90556 kB' 'Active(anon): 1789968 kB' 'Inactive(anon): 0 kB'
'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697412 kB' 'Mapped: 129388 kB' 'AnonPages: 401896 kB' 'Shmem: 1391260 kB' 'KernelStack: 12712 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104244 kB' 'Slab: 334176 kB' 'SReclaimable: 104244 kB' 'SUnreclaim: 229932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.365 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.366 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 83063536 kB' 'MemUsed: 10654960 kB' 'SwapCached: 0 kB' 'Active: 4187656 kB' 'Inactive: 3360064 kB' 'Active(anon): 4016672 kB' 
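The per-node lookups traced above all follow the same pattern: pick the node's own meminfo file when a node index is given, strip the "Node N " prefix, then scan key/value pairs with IFS=': '. A minimal standalone sketch of that pattern (illustrative only; get_meminfo_sketch is not the SPDK helper itself, and it assumes bash with extglob enabled for the prefix strip):

shopt -s extglob
get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <key> [node] -> prints the key's value
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    # A per-node lookup reads that node's own meminfo, as common.sh@23-24 does above
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Surp 0   # would print 0 for the node0 snapshot traced above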
[xtrace: setup/common.sh@32 scans each node1 meminfo key (MemTotal through HugePages_Free) against HugePages_Surp, skipping each with "continue"]
00:03:17.367 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.367 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.367 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:17.626 node0=512 expecting 512
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:17.626 node1=512 expecting 512
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:17.626 
00:03:17.626 real	0m3.003s
00:03:17.626 user	0m1.209s
00:03:17.626 sys	0m1.861s
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:17.626 18:18:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:17.626 ************************************
00:03:17.626 END TEST even_2G_alloc
00:03:17.626 ************************************
00:03:17.626 18:18:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:17.626 18:18:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:17.626 18:18:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:17.626 18:18:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:17.626 18:18:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.626 ************************************
00:03:17.626 START TEST odd_alloc
00:03:17.626 ************************************
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.626 18:18:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.160 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.160 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
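odd_alloc requests 1025 pages (get_test_nr_hugepages 2098176, i.e. HUGEMEM=2049 MiB of 2048 kB pages, per the trace above), which cannot split evenly across 2 nodes: the nodes_test assignments above give node1 the integer quotient (512) and fold the remaining page into node0 (513). A sketch of that arithmetic (variable names are illustrative, not the hugepages.sh internals; only the 513/512 outcome is taken from the log):

nr_hugepages=1025 no_nodes=2
base=$(( nr_hugepages / no_nodes ))   # 512
rem=$(( nr_hugepages % no_nodes ))    # 1
nodes_test=()
for (( n = no_nodes - 1; n >= 0; n-- )); do
    # the first "rem" nodes (counting from node0) absorb the leftover page
    nodes_test[n]=$(( base + (n < rem ? 1 : 0) ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512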
00:03:20.160 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.160 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.425 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176553784 kB' 'MemAvailable: 179354800 kB' 'Buffers: 4132 kB' 'Cached: 9113028 kB' 'SwapCached: 0 kB' 'Active: 6195020 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808440 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531256 kB' 'Mapped: 198104 kB' 'Shmem: 5279960 kB' 'KReclaimable: 216532 kB' 'Slab: 728804 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512272 kB' 'KernelStack: 20752 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7303748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315576 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
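The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test traced above reads the kernel's transparent-hugepage mode, where the active choice is bracketed; AnonHugePages is only consulted when THP is not [never]. A sketch of the same gate (standard sysfs/procfs paths; the awk extraction is illustrative, not the SPDK helper):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # /proc/meminfo reports THP-backed anonymous memory in kB
    anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon} kB"   # 0 kB in the snapshot above
fi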
216532 kB' 'Slab: 728804 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512272 kB' 'KernelStack: 20752 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7303748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315576 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 
18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.426 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 
18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176553604 kB' 'MemAvailable: 179354620 kB' 'Buffers: 4132 kB' 'Cached: 9113032 kB' 'SwapCached: 0 kB' 'Active: 6193808 kB' 'Inactive: 3450620 kB' 'Active(anon): 5807228 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530512 kB' 'Mapped: 198028 kB' 'Shmem: 5279964 kB' 'KReclaimable: 216532 kB' 'Slab: 728788 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512256 kB' 'KernelStack: 20736 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7303768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315560 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.427 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
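The quoted block fed through printf '%s\n' above is the full meminfo snapshot the scan walks, and its hugepage counters are internally consistent. A by-hand check, using values taken verbatim from the snapshot:

  # 1025 pages of 2048 kB each should equal the kernel's Hugetlb total
  $ echo $((1025 * 2048))
  2099200    # matches 'Hugetlb: 2099200 kB' in the snapshot

All of them are still free here as well: HugePages_Free == HugePages_Total == 1025.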
00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.428 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.429 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176554320 kB' 'MemAvailable: 179355336 kB' 'Buffers: 4132 kB' 'Cached: 9113048 kB' 'SwapCached: 0 kB' 'Active: 6193824 kB' 'Inactive: 3450620 kB' 'Active(anon): 5807244 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530508 kB' 'Mapped: 198028 kB' 'Shmem: 5279980 kB' 'KReclaimable: 216532 kB' 'Slab: 728788 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512256 kB' 'KernelStack: 20736 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315560 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
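The backslash-riddled patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace escapes every character to show that the comparison is literal rather than a glob match. A two-line reproduction (variable name hypothetical):

  $ set -x
  $ key=HugePages_Rsvd
  + key=HugePages_Rsvd
  $ [[ MemTotal == "$key" ]]
  + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]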
00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.430 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
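The mem=("${mem[@]#Node +([0-9]) }") expansion seen at common.sh@29 exists because per-node meminfo files prefix every line with "Node N "; stripping that prefix lets one parser handle both /proc/meminfo and the per-node files. A standalone illustration (sample lines made up to mirror the snapshot above):

  $ shopt -s extglob    # +([0-9]) is an extended glob: one or more digits
  $ mem=('Node 0 MemTotal: 191381180 kB' 'MemFree: 176554320 kB')
  $ mem=("${mem[@]#Node +([0-9]) }")
  $ printf '%s\n' "${mem[@]}"
  MemTotal: 191381180 kB
  MemFree: 176554320 kB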
00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.431 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 
18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:20.432 nr_hugepages=1025 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.432 resv_hugepages=0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.432 surplus_hugepages=0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.432 anon_hugepages=0 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.432 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176554320 kB' 'MemAvailable: 179355336 kB' 'Buffers: 4132 kB' 'Cached: 9113088 kB' 'SwapCached: 0 kB' 'Active: 6193848 kB' 'Inactive: 3450620 kB' 'Active(anon): 5807268 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530476 kB' 'Mapped: 198028 kB' 'Shmem: 5280020 kB' 'KReclaimable: 216532 kB' 'Slab: 728788 kB' 'SReclaimable: 216532 kB' 'SUnreclaim: 512256 kB' 'KernelStack: 20736 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7303808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315560 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.432 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.432 18:18:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 -- # each /proc/meminfo field from Buffers through Unaccepted fails [[ $var == HugePages_Total ]] and continues]
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.434 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.435 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93499424 kB' 'MemUsed: 4163260 kB' 'SwapCached: 0 kB' 'Active: 2004848 kB' 'Inactive: 90556 kB' 'Active(anon): 1789252 kB' 'Inactive(anon): 0 kB' 'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697428 kB' 'Mapped: 129388 kB' 'AnonPages: 401084 kB' 'Shmem: 1391276 kB' 'KernelStack: 12712 kB' 'PageTables: 5680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104244 kB' 'Slab: 334524 kB' 'SReclaimable: 104244 kB' 'SUnreclaim: 230280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 -- # node0 fields MemTotal through HugePages_Free each fail [[ $var == HugePages_Surp ]] and continue]
00:03:20.696 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[xtrace condensed: setup/common.sh@17-31 -- # get_meminfo prologue as for node 0, with node=1 and mem_f=/sys/devices/system/node/node1/meminfo]
00:03:20.697 18:18:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 83055716 kB' 'MemUsed: 10662780 kB' 'SwapCached: 0 kB' 'Active: 4189064 kB' 'Inactive: 3360064 kB' 'Active(anon): 4018080 kB' 'Inactive(anon): 0 kB' 'Active(file): 170984 kB' 'Inactive(file): 3360064 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7419816 kB' 'Mapped: 68640 kB' 'AnonPages: 129432 kB' 'Shmem: 3888768 kB' 'KernelStack: 8024 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112288 kB' 'Slab: 394264 kB' 'SReclaimable: 112288 kB' 'SUnreclaim: 281976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
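The trace above is setup/common.sh's get_meminfo helper walking a meminfo file one "Field: value" line at a time; the long backslash runs such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are just how bash's set -x renders a quoted right-hand side of == inside [[ ]], escaping every character to mark it as a literal match rather than a glob pattern. A minimal sketch of the loop, reconstructed from the xtrace (the real setup/common.sh may differ in detail):

    shopt -s extglob                          # needed for the +([0-9]) pattern below
    get_meminfo() {                           # e.g. get_meminfo HugePages_Total
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the skipped fields traced above
            echo "$val"                       # e.g. 1025, or 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

The two dumps above are those per-node files after the "Node N " prefix is stripped: node0 reports HugePages_Total: 512 and node1 reports 513, i.e. the 1025 pages of this odd_alloc run split across the two NUMA nodes.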
[xtrace condensed: setup/common.sh@31-32 -- # node1 fields MemTotal through HugePages_Free each fail [[ $var == HugePages_Surp ]] and continue]
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
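What odd_alloc verifies: the run asked for an odd total of 1025 hugepages across two NUMA nodes, so one node must hold the extra page (here node0 ends with 512 and node1 with 513). The comparison below is deliberately order-insensitive: each count is used as the index of a bash indexed array, which keeps the set of counts numerically sorted, so "node0=512 expecting 513" still passes as long as the other node compensates. A sketch of that bookkeeping, with array names from the xtrace and the echo argument order reconstructed to match the printed output:

    nodes_test=([0]=513 [1]=512)   # per-node split the test asked for
    nodes_sys=([0]=512 [1]=513)    # HugePages_Total read back from each node
    resv=0                         # reserved pages; 0 in this run
    declare -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))  # hugepages.sh@116
        (( nodes_test[node] += 0 ))     # @117: per-node HugePages_Surp, 0 above
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # counts become array indices, so bash
        sorted_s[nodes_sys[node]]=1     # keeps each key set numerically sorted
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # "512 513" == "512 513" -> pass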
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:20.698 node0=512 expecting 513
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:20.698 node1=513 expecting 512
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:20.698
00:03:20.698 real 0m3.036s
00:03:20.698 user 0m1.245s
00:03:20.698 sys 0m1.859s
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:20.698 18:18:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.698 ************************************
00:03:20.698 END TEST odd_alloc
00:03:20.698 ************************************
00:03:20.698 18:18:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:20.698 18:18:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:20.698 18:18:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:20.698 18:18:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.698 18:18:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.698 ************************************
00:03:20.698 START TEST custom_alloc
00:03:20.698 ************************************
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.698 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.699 18:18:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
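custom_alloc, unlike odd_alloc, pins an explicit pool to each node: 1048576 kB (512 pages of the 2048 kB default size) for node 0 and 2097152 kB (1024 pages) for node 1, 1536 pages in total. The HUGENODE string handed to scripts/setup.sh is assembled from the nodes_hp array, and the local IFS=, at hugepages.sh@167 is what makes the "${HUGENODE[*]}" expansion join on commas. A sketch reconstructed from the xtrace (build_hugenode is a hypothetical wrapper name; how setup.sh consumes HUGENODE is not shown here):

    build_hugenode() {
        local IFS=,                  # hugepages.sh@167: "${HUGENODE[*]}" joins on ','
        local node _nr_hugepages=0
        local -a nodes_hp HUGENODE
        nodes_hp[0]=512              # 1048576 kB / 2048 kB per page
        nodes_hp[1]=1024             # 2097152 kB / 2048 kB per page
        for node in "${!nodes_hp[@]}"; do
            HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
            (( _nr_hugepages += nodes_hp[node] ))     # -> 1536
        done
        echo "${HUGENODE[*]}"        # nodes_hp[0]=512,nodes_hp[1]=1024
    }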
00:03:23.230 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.230 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.230 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.491 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.491 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.492 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.492 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175545644 kB' 'MemAvailable: 178346656 kB' 'Buffers: 4132 kB' 'Cached: 9113184 kB' 'SwapCached: 0 kB' 'Active: 6195032 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808452 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531700 kB' 'Mapped: 197976 kB' 'Shmem: 5280116 kB' 'KReclaimable: 216524 kB' 'Slab: 728504 kB' 'SReclaimable: 216524 kB' 'SUnreclaim: 511980 kB' 'KernelStack: 20768 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7304600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315608 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
[xtrace condensed: the IFS=': ' read loop tests each /proc/meminfo key against AnonHugePages and issues continue for every non-matching field]
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
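[editor's note] Every get_meminfo call in this trace follows the same pattern: snapshot the relevant meminfo file, strip any per-node prefix, then scan key/value pairs until the requested field matches (the long runs of continue that were condensed above and below). A compact bash sketch of that pattern, using the trace's variable names but simplified logic, not the verbatim SPDK setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Hedged sketch of the get_meminfo lookup the xtrace is exercising.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # a per-node query reads that node's own meminfo instead, when present
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on node files

    # scan every "Key: value ..." pair; this is the long match/continue run
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. AnonHugePages -> 0, HugePages_Total -> 1536
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

anon=$(get_meminfo AnonHugePages)   # 0 on this box, as the trace shows

The next two lookups in the log, HugePages_Surp and HugePages_Rsvd, repeat exactly this scan over a fresh snapshot.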
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.493 18:18:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175546180 kB' 'MemAvailable: 178347192 kB' 'Buffers: 4132 kB' 'Cached: 9113188 kB' 'SwapCached: 0 kB' 'Active: 6195012 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808432 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531624 kB' 'Mapped: 197912 kB' 'Shmem: 5280120 kB' 'KReclaimable: 216524 kB' 'Slab: 728516 kB' 'SReclaimable: 216524 kB' 'SUnreclaim: 511992 kB' 'KernelStack: 20752 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7304620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315560 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
[xtrace condensed: the IFS=': ' read loop tests each /proc/meminfo key against HugePages_Surp and issues continue for every non-matching field]
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
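[editor's note] At this point the verification has anon=0 and surp=0, and every snapshot reports HugePages_Total: 1536 and HugePages_Free: 1536. The arithmetic being checked is straightforward; a worked sketch of the accounting the trace implies (the exact bookkeeping lives in verify_nr_hugepages, which this simplifies):

#!/usr/bin/env bash
# Worked numbers from the meminfo snapshots above (illustrative check only).
nodes_hp0=512 nodes_hp1=1024
expected_total=$((nodes_hp0 + nodes_hp1))     # 1536 == HugePages_Total
hugepagesize_kb=2048
echo $((expected_total * hugepagesize_kb))    # 3145728 kB == Hugetlb

# with no anonymous THP, no surplus, and (pending the lookup below) no
# reserved pages, every allocated page should still be free:
anon=0 surp=0 resv=0
echo $((expected_total - surp - resv))        # 1536 == HugePages_Free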
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175546612 kB' 'MemAvailable: 178347624 kB' 'Buffers: 4132 kB' 'Cached: 9113204 kB' 'SwapCached: 0 kB' 'Active: 6195004 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808424 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531636 kB' 'Mapped: 197912 kB' 'Shmem: 5280136 kB' 'KReclaimable: 216524 kB' 'Slab: 728556 kB' 'SReclaimable: 216524 kB' 'SUnreclaim: 512032 kB' 'KernelStack: 20752 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7304640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315560 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.495 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the IFS=': ' read loop tests each /proc/meminfo key against HugePages_Rsvd and issues continue for every non-matching field, down through HugePages_Total]
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:23.497 nr_hugepages=1536 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.497 resv_hugepages=0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.497 surplus_hugepages=0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.497 anon_hugepages=0 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175544996 kB' 'MemAvailable: 178346008 kB' 'Buffers: 4132 kB' 'Cached: 9113228 kB' 'SwapCached: 0 kB' 'Active: 6195076 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808496 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531700 kB' 'Mapped: 197912 kB' 'Shmem: 5280160 kB' 'KReclaimable: 216524 kB' 'Slab: 728556 kB' 'SReclaimable: 216524 kB' 'SUnreclaim: 512032 kB' 'KernelStack: 20704 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 
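For readability, the key-scan spans above and below are condensed. The helper they trace, get_meminfo in setup/common.sh, reads a meminfo file one "key: value" pair at a time and echoes the value of the requested key. A minimal sketch of that logic, reconstructed only from the @17-@33 line references visible in this trace (the shipped SPDK helper may differ in detail):

    # Sketch reconstructed from the xtrace above; assumed, not verbatim SPDK source.
    # get_meminfo KEY [NODE] prints KEY's value from /proc/meminfo, or from the
    # per-node copy under /sys/devices/system/node/nodeN/meminfo when NODE is given.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix keys with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long continue runs in this log
            echo "$val" && return 0            # kB for sizes, bare counts for HugePages_*
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Total for the whole system, or get_meminfo HugePages_Surp 0 for node 0, which is exactly the pattern the surrounding trace shows.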
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.497 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175544996 kB' 'MemAvailable: 178346008 kB' 'Buffers: 4132 kB' 'Cached: 9113228 kB' 'SwapCached: 0 kB' 'Active: 6195076 kB' 'Inactive: 3450620 kB' 'Active(anon): 5808496 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531700 kB' 'Mapped: 197912 kB' 'Shmem: 5280160 kB' 'KReclaimable: 216524 kB' 'Slab: 728556 kB' 'SReclaimable: 216524 kB' 'SUnreclaim: 512032 kB' 'KernelStack: 20704 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7305944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315512 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.758 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: MemFree through Unaccepted each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and continue ...]
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
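The trace resumes below with get_nodes, which walks /sys/devices/system/node/node* and records each node's hugepage count (512 on node 0 and 1024 on node 1 in this run). A hedged sketch of that enumeration; how hugepages.sh actually sources the per-node counts is not visible in this log, so the get_meminfo call below is an assumption:

    # Sketch of the get_nodes logic traced at hugepages.sh@27-33; the names mirror
    # the trace, but the source of the per-node counts is assumed, not confirmed.
    shopt -s extglob nullglob
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Key by numeric node id (node0 -> 0); this run records 512 and 1024.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail when no NUMA topology is exposed
    }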
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93521844 kB' 'MemUsed: 4140840 kB' 'SwapCached: 0 kB' 'Active: 1999260 kB' 'Inactive: 90556 kB' 'Active(anon): 1783664 kB' 'Inactive(anon): 0 kB' 'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697504 kB' 'Mapped: 129260 kB' 'AnonPages: 395524 kB' 'Shmem: 1391352 kB' 'KernelStack: 12712 kB' 'PageTables: 5672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104236 kB' 'Slab: 334220 kB' 'SReclaimable: 104236 kB' 'SUnreclaim: 229984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.760 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: MemFree through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ...]
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
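Node 0 has just reported HugePages_Surp: 0, and the same lookup runs next for node 1 below. The invariant the custom_alloc test is building toward is plain arithmetic: node 0's 512 pages plus node 1's 1024 pages must account for all 1536 allocated pages, with zero surplus and zero reserved. A self-contained check under those assumptions (verify_split is a hypothetical name, not part of the SPDK scripts):

    # Hypothetical condensation of the per-node accounting traced at hugepages.sh@115-117.
    verify_split() {
        local -a nodes_test=([0]=512 [1]=1024)   # expected split from this trace
        local node surp resv=0 total=0
        for node in "${!nodes_test[@]}"; do
            surp=$(get_meminfo HugePages_Surp "$node")   # 0 on both nodes here
            (( total += nodes_test[node] + surp + resv ))
        done
        (( total == 1536 ))   # 512 + 1024 == nr_hugepages
    }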
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 82020892 kB' 'MemUsed: 11697604 kB' 'SwapCached: 0 kB' 'Active: 4191680 kB' 'Inactive: 3360064 kB' 'Active(anon): 4020696 kB' 'Inactive(anon): 0 kB' 'Active(file): 170984 kB' 'Inactive(file): 3360064 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7419876 kB' 'Mapped: 68652 kB' 'AnonPages: 132152 kB' 'Shmem: 3888828 kB' 'KernelStack: 8344 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112288 kB' 'Slab: 394336 kB' 'SReclaimable: 112288 kB' 'SUnreclaim: 282048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.761 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: MemFree through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ...]
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # ((
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:23.763 node0=512 expecting 512
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:23.763 node1=1024 expecting 1024
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:23.763
00:03:23.763 real 0m3.055s
00:03:23.763 user 0m1.236s
00:03:23.763 sys 0m1.887s
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:23.763 18:18:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:23.763 ************************************
00:03:23.763 END TEST custom_alloc
00:03:23.763 ************************************
00:03:23.763 18:18:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
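The custom_alloc pass/fail above reduces to comparing each NUMA node's allocated 2048 kB hugepage count against the test's expectation (node0=512, node1=1024). A minimal stand-alone sketch of that per-node check, assuming the usual sysfs hugepages layout (illustrative only, not the harness's own helper):

#!/usr/bin/env bash
# Sketch: re-create the "node0=512 expecting 512" comparison seen in the log.
# Assumption: /sys/devices/system/node/node<N>/hugepages/hugepages-2048kB exists.
declare -A expected=([0]=512 [1]=1024)   # per-node expectations, as in the log
for node in 0 1; do
    nr_file="/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    actual=$(<"$nr_file")
    echo "node${node}=${actual} expecting ${expected[$node]}"
    [[ $actual -eq ${expected[$node]} ]] || exit 1   # a mismatch fails the test
done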
00:03:23.763 18:18:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:23.763 18:18:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:23.763 18:18:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:23.763 18:18:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.763 ************************************
00:03:23.763 START TEST no_shrink_alloc
00:03:23.763 ************************************
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.763 18:18:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.297 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.297 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.297 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.297 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.559 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
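With setup.sh done, note how the get_test_nr_hugepages trace above turned the requested size into a page count: the argument is apparently in kB, so 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, consistent with the HugePages_Total: 1024 and Hugetlb: 2097152 kB reported below. As a sketch of that arithmetic (variable names here are illustrative, not the script's own):

# size in kB / hugepage size in kB = number of hugepages to allocate
size_kb=2097152      # the argument traced in get_test_nr_hugepages
hugepage_kb=2048     # Hugepagesize from /proc/meminfo on this machine
echo $(( size_kb / hugepage_kb ))   # prints 1024, matching nr_hugepages=1024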
00:03:26.559 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176619008 kB' 'MemAvailable: 179420008 kB' 'Buffers: 4132 kB' 'Cached: 9113332 kB' 'SwapCached: 0 kB' 'Active: 6189928 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803348 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525844 kB' 'Mapped: 197228 kB' 'Shmem: 5280264 kB' 'KReclaimable: 216500 kB' 'Slab: 729204 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512704 kB' 'KernelStack: 20688 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7297920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315540 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.560 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
... (the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for each /proc/meminfo field, MemFree through HardwareCorrupted) ...
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
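The lookup that just returned anon=0 is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one field at a time, exactly the IFS=': ' / read / compare cycle traced above. A simplified reconstruction of that loop (the per-node /sys/devices/system/node branch and the mapfile buffering are omitted; this is a sketch, not the shipped function):

# Simplified get_meminfo: print the value of one /proc/meminfo field.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"     # e.g. AnonHugePages -> 0 on this machine
            return 0
        fi
    done </proc/meminfo
    echo 0                       # field absent: report 0
}

get_meminfo AnonHugePages        # -> 0, matching the trace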
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.561 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176620732 kB' 'MemAvailable: 179421732 kB' 'Buffers: 4132 kB' 'Cached: 9113336 kB' 'SwapCached: 0 kB' 'Active: 6189900 kB' 'Inactive: 3450620 kB' 'Active(anon): 5803320 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526304 kB' 'Mapped: 197144 kB' 'Shmem: 5280268 kB' 'KReclaimable: 216500 kB' 'Slab: 729152 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512652 kB' 'KernelStack: 20784 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7297568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315476 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.562 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
... (the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for each /proc/meminfo field, MemFree through HugePages_Rsvd) ...
00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
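verify_nr_hugepages folds these counters together: anon=0 and surp=0 so far, with HugePages_Rsvd queried next. A short sketch of that sequence of reads, reusing the get_meminfo sketch above (the real function additionally reconciles per-node sysfs counts):

anon=$(get_meminfo AnonHugePages)     # 0 kB on this run
surp=$(get_meminfo HugePages_Surp)    # 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved pages, per the snapshot
free=$(get_meminfo HugePages_Free)    # 1024
total=$(get_meminfo HugePages_Total)  # 1024
echo "anon=${anon} surp=${surp} resv=${resv} free=${free}/${total}"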
'Committed_AS: 7297596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315444 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.564 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.827 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.828 nr_hugepages=1024 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.828 resv_hugepages=0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.828 surplus_hugepages=0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.828 anon_hugepages=0 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176617672 kB' 'MemAvailable: 179418672 kB' 'Buffers: 4132 kB' 'Cached: 9113372 kB' 'SwapCached: 0 kB' 'Active: 6191284 kB' 'Inactive: 3450620 kB' 'Active(anon): 5804704 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527620 kB' 'Mapped: 197640 kB' 'Shmem: 5280304 kB' 'KReclaimable: 216500 kB' 'Slab: 729152 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512652 kB' 'KernelStack: 20704 kB' 'PageTables: 
8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7300560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315396 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.828 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.829 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
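
The scan that just finished is the script's get_meminfo helper resolving HugePages_Total: it snapshots a meminfo file into an array with mapfile, strips any "Node N " prefix, then walks the "Key: Value" pairs with IFS=': ' until the requested key matches and echoes its value (1024 here). A minimal self-contained sketch of that same parsing pattern, for readers following the trace -- the function name and layout below are illustrative, not SPDK's exact setup/common.sh source:

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern visible in the trace above.
  shopt -s extglob
  get_meminfo_sketch() {
          local get=$1 node=${2:-} var val _ mem
          local mem_f=/proc/meminfo
          # With a node argument, read that node's view of the same counters.
          if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          # Per-node files prefix every line with "Node N "; drop it (extglob).
          mem=("${mem[@]#Node +([0-9]) }")
          while IFS=': ' read -r var val _; do
                  # var is the key (the colon is eaten by IFS), val the number.
                  [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
          done < <(printf '%s\n' "${mem[@]}")
          echo 0
  }
  get_meminfo_sketch HugePages_Total    # 1024 on the host in this log
  get_meminfo_sketch HugePages_Surp 0   # node0's surplus count, 0 here

The long runs of [[ ... == \H\u\g\e... ]] / continue pairs above are simply this loop rejecting every key before the match; bash xtrace prints the right-hand pattern with each character escaped.
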
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.829 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
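
get_nodes has just enumerated the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) (extglob again), recording 1024 pre-allocated hugepages on node0 and 0 on node1, so no_nodes=2. The loop entered next folds reserved pages into each node's expected total and re-runs get_meminfo against the node's own meminfo file. A rough standalone equivalent of that per-node accounting, assuming 2048 kB hugepages and the standard sysfs layout (the variable names and the awk extraction are illustrative, not the script's own code):

  #!/usr/bin/env bash
  # Enumerate NUMA nodes as the trace does and report per-node hugepage counts.
  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} keeps only the numeric suffix, e.g. node0 -> 0
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"    # 2 on this host: node0=1024, node1=0
  for idx in "${!nodes_sys[@]}"; do
          # Node meminfo lines read "Node N HugePages_Surp: 0";
          # field 3 is the key, field 4 the value.
          surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
                  "/sys/devices/system/node/node$idx/meminfo")
          echo "node$idx: allocated=${nodes_sys[$idx]} surplus=${surp:-0}"
  done

The per-node pass below switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo as soon as the node file exists, then repeats the same key scan.
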
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.830 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.830 18:18:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail the [[ $var == HugePages_Surp ]] match and hit continue]
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.831 18:18:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.364 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.364 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.364 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.627 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.627 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.628 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176634952 kB' 'MemAvailable: 179435952 kB' 'Buffers: 4132 kB' 'Cached: 9113468 kB' 'SwapCached: 0 kB' 'Active: 6190744 kB' 'Inactive: 3450620 kB' 'Active(anon): 5804164 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526528 kB' 'Mapped: 197308 kB' 'Shmem: 5280400 kB' 'KReclaimable: 216500 kB' 'Slab: 729128 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512628 kB' 'KernelStack: 20608 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7296164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315412 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
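[editor's note] The xtrace above is get_meminfo from setup/common.sh walking that meminfo snapshot one field at a time until it finds the requested counter. A minimal sketch of the loop the trace implies follows; the IFS=': ' read, the per-key continue, and the "Node N " prefix strip are read directly off the trace, while the surrounding control flow is an assumption rather than the verbatim SPDK source.

#!/usr/bin/env bash
# Reconstruction of the get_meminfo walk shown in the xtrace above.
shopt -s extglob # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read that node's own meminfo instead of the global one
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # node files prefix every line with "Node N "

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # the long runs of 'continue' in the log
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Total # -> 1024 on this box, per the snapshot above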
00:03:29.628 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted fails the [[ $var == AnonHugePages ]] match and hits continue]
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
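[editor's note] verify_nr_hugepages records anon=0 here, and surp and resv below, by calling get_meminfo once per counter. The same values can be spot-checked by hand; grep/awk is an illustrative equivalent, not what setup/common.sh itself runs:

# Hand-check of the counters verify_nr_hugepages reads from /proc/meminfo
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages' /proc/meminfo
# e.g. print just the surplus count:
awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo # -> 0 in this run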
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176635560 kB' 'MemAvailable: 179436560 kB' 'Buffers: 4132 kB' 'Cached: 9113472 kB' 'SwapCached: 0 kB' 'Active: 6189492 kB' 'Inactive: 3450620 kB' 'Active(anon): 5802912 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525744 kB' 'Mapped: 197148 kB' 'Shmem: 5280404 kB' 'KReclaimable: 216500 kB' 'Slab: 729416 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512916 kB' 'KernelStack: 20640 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7296180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315396 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:29.629 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd fails the [[ $var == HugePages_Surp ]] match and hits continue]
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
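[editor's note] The 'node0=1024 expecting 1024' line earlier comes from the per-node accounting in hugepages.sh; the kernel also exposes those per-node counts in sysfs. A by-hand check using standard kernel paths (not what hugepages.sh literally runs):

# Per-node 2 MiB hugepage count behind the 'node0=1024 expecting 1024' line
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# Same totals restricted to node0, via the per-node meminfo file the trace
# probes with [[ -e /sys/devices/system/node/node$node/meminfo ]]:
grep -i huge /sys/devices/system/node/node0/meminfo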
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176635560 kB' 'MemAvailable: 179436560 kB' 'Buffers: 4132 kB' 'Cached: 9113492 kB' 'SwapCached: 0 kB' 'Active: 6189516 kB' 'Inactive: 3450620 kB' 'Active(anon): 5802936 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525744 kB' 'Mapped: 197148 kB' 'Shmem: 5280424 kB' 'KReclaimable: 216500 kB' 'Slab: 729408 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512908 kB' 'KernelStack: 20640 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7296204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315396 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
00:03:29.630 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields from MemTotal onward fail the [[ $var == HugePages_Rsvd ]] match and hit continue; this capture ends mid-scan after PageTables]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.631 nr_hugepages=1024 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.631 resv_hugepages=0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.631 surplus_hugepages=0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.631 anon_hugepages=0 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.631 18:18:15 
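The scan just traced is SPDK's generic meminfo reader: load the whole file, strip any per-node prefix, then walk field by field until the requested key matches and print its value. A hypothetical condensed sketch of that pattern (the function name and exact structure are my assumptions, not the SPDK source verbatim; the field names and the HugePages_Rsvd lookup come straight from the trace above):

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo pattern exercised above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # When a node is given, read that node's sysfs copy of meminfo instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # numeric value only, e.g. 0 for HugePages_Rsvd
            return 0
        fi
    done
    return 1
}

resv=$(get_meminfo_sketch HugePages_Rsvd)    # the "resv=0" step in the trace

With 1024 total 2048 kB hugepages and a reservation count of 0, the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 accounting above balances, which is what the two arithmetic checks assert.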
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.631 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.632 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.632 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176635840 kB' 'MemAvailable: 179436840 kB' 'Buffers: 4132 kB' 'Cached: 9113532 kB' 'SwapCached: 0 kB' 'Active: 6189200 kB' 'Inactive: 3450620 kB' 'Active(anon): 5802620 kB' 'Inactive(anon): 0 kB' 'Active(file): 386580 kB' 'Inactive(file): 3450620 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525376 kB' 'Mapped: 197148 kB' 'Shmem: 5280464 kB' 'KReclaimable: 216500 kB' 'Slab: 729408 kB' 'SReclaimable: 216500 kB' 'SUnreclaim: 512908 kB' 'KernelStack: 20624 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7296224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315396 kB' 'VmallocChunk: 0 kB' 'Percpu: 65664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2036692 kB' 'DirectMap2M: 18614272 kB' 'DirectMap1G: 181403648 kB'
[setup/common.sh@31-32 xtrace loop elided: every field from MemTotal through Unaccepted is tested against HugePages_Total and skipped with continue]
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
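get_nodes, just traced, enumerates /sys/devices/system/node/node<N> and records how many hugepages each NUMA node holds (1024 on node0, 0 on node1 on this box). A hypothetical sketch of that step; only the loop shape appears in the trace, so reading and summing each pool's nr_hugepages file is an assumption about where the counts come from:

#!/usr/bin/env bash
# Sketch of the get_nodes step: one hugepage count per NUMA node.
shopt -s extglob nullglob
declare -A nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}    # "/sys/devices/system/node/node0" -> "0"
    total=0
    for hp in "$node"/hugepages/hugepages-*; do
        (( total += $(<"$hp/nr_hugepages") ))
    done
    nodes_sys[$n]=$total
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1    # mirrors the "(( no_nodes > 0 ))" check
echo "no_nodes=$no_nodes"       # 2 on this machine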
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.633 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92504372 kB' 'MemUsed: 5158312 kB' 'SwapCached: 0 kB' 'Active: 2000216 kB' 'Inactive: 90556 kB' 'Active(anon): 1784620 kB' 'Inactive(anon): 0 kB' 'Active(file): 215596 kB' 'Inactive(file): 90556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1697680 kB' 'Mapped: 128476 kB' 'AnonPages: 396236 kB' 'Shmem: 1391528 kB' 'KernelStack: 12680 kB' 'PageTables: 5612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104212 kB' 'Slab: 334324 kB' 'SReclaimable: 104212 kB' 'SUnreclaim: 230112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace loop elided: every node0 field from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped with continue]
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:29.634 node0=1024 expecting 1024
00:03:29.634 18:18:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:29.634
00:03:29.634 real 0m5.968s
00:03:29.634 user 0m2.455s
00:03:29.634 sys 0m3.651s
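The check that just printed "node0=1024 expecting 1024" folds the reserved and surplus pages into the expected count for each node and requires that sysfs agrees. A hypothetical condensed sketch, reusing the get_meminfo_sketch helper from earlier; the array seeds mirror the values in the trace:

#!/usr/bin/env bash
# Sketch of the per-node verification step traced above.
declare -A nodes_test=([0]=1024) nodes_sys=([0]=1024)
resv=0    # HugePages_Rsvd read earlier in the trace

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo_sketch HugePages_Surp "$node")    # 0 for node0 here
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
    [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
done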
00:03:29.893 18:18:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
[xtrace elided: the same @40/@41 pair repeats for the second huge-page size on this node and for both sizes on the second node, writing 0 to every nr_hugepages file]
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:29.893 18:18:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:29.893
00:03:29.893 real 0m23.272s
00:03:29.893 user 0m8.995s
00:03:29.893 sys 0m13.402s
00:03:29.893 18:18:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:29.893 18:18:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.893 ************************************
00:03:29.893 END TEST hugepages
00:03:29.893 ************************************
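clear_hp, just traced, is the whole hugepage teardown: write a zero into every per-node nr_hugepages file so the kernel releases the reserved pages. A sketch of the same walk (assumes root and that hugepage sysfs nodes exist on the machine):

for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
        echo 0 >"$hp"      # hand this node's pages of this size back to the kernel
    done
done
export CLEAR_HUGE=yes      # env flag read by SPDK's scripts/setup.sh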
00:03:29.893 18:18:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:29.893 18:18:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:29.893 18:18:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:29.893 18:18:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:29.893 18:18:15 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:29.893 ************************************
00:03:29.893 START TEST driver
00:03:29.893 ************************************
00:03:29.893 18:18:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:29.893 * Looking for test storage...
00:03:29.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:29.893 18:18:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:29.893 18:18:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:29.893 18:18:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:34.082 18:18:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:34.082 18:18:19 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:34.082 18:18:19 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:34.082 18:18:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:34.082 ************************************
00:03:34.082 START TEST guess_driver
00:03:34.082 ************************************
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:34.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
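pick_driver's vfio branch, condensed: vfio-pci is usable when the kernel exposes at least one IOMMU group (or unsafe no-IOMMU mode is enabled) and modprobe can resolve vfio_pci plus its dependencies to real .ko objects. A sketch of that decision; the script's fallback/'No valid driver found' path exists but is not exercised in this run:

shopt -s nullglob                  # so an empty iommu_groups dir yields an empty array
vfio_usable() {
    local groups=(/sys/kernel/iommu_groups/*)
    (( ${#groups[@]} > 0 )) || return 1                   # this host: 174 groups
    [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]    # is_driver: deps resolve to modules
}
if vfio_usable; then driver=vfio-pci; else driver='No valid driver found'; fi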
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:34.082 Looking for driver=vfio-pci
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.082 18:18:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:37.421 18:18:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:37.421 18:18:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:37.421 18:18:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[xtrace elided: the @58/@61/@57 triple repeats once per device row emitted by setup.sh config; every row reports vfio-pci, the final batch at 00:03:38.356/18:18:23]
00:03:38.614 18:18:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:38.614 18:18:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:38.614 18:18:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:38.614 18:18:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:42.847
00:03:42.847 real 0m8.510s
00:03:42.847 user 0m2.307s
00:03:42.847 sys 0m4.104s
00:03:42.847 18:18:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:42.847 18:18:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:42.847 ************************************
00:03:42.847 END TEST guess_driver
00:03:42.847 ************************************
00:03:42.847 18:18:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:03:42.847
00:03:42.847 real 0m12.760s
00:03:42.847 user 0m3.501s
00:03:42.847 sys 0m6.330s
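The verification loop that just ran is a small line protocol: 'setup output config' prints one row per device, a '->' marker separates the device from the driver it ended up bound to, and the test fails if any row names a driver other than the one pick_driver chose. A sketch with the field positions assumed from the read call in the trace ('setup' is the suite's wrapper around scripts/setup.sh):

fail=0 driver=vfio-pci
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue              # skip rows that aren't device lines
    [[ $setup_driver == "$driver" ]] || fail=1     # any mismatch fails the test
done < <(setup output config)
(( fail == 0 ))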
00:03:42.847 18:18:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:42.847 18:18:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:42.847 ************************************
00:03:42.847 END TEST driver
00:03:42.847 ************************************
00:03:42.847 18:18:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:42.847 18:18:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:42.847 18:18:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:42.847 18:18:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:42.847 18:18:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:42.847 ************************************
00:03:42.847 START TEST devices
00:03:42.847 ************************************
00:03:42.847 18:18:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:42.847 * Looking for test storage...
00:03:42.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:42.847 18:18:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:42.847 18:18:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:42.847 18:18:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:42.847 18:18:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:46.166 18:18:31 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:46.166 No valid GPT data, bailing
00:03:46.166 18:18:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:46.166 18:18:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:46.166 18:18:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:46.166 18:18:31 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
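Device qualification, as traced: a disk is a candidate only if it is not zoned, carries no GPT or other partition signature (block_in_use), and is at least min_disk_size (3 GiB) going by its sysfs size file, which counts 512-byte sectors. A standalone sketch of those three gates (spdk-gpt.py is SPDK's own probe; plain blkid stands in for it here):

min_disk_size=$((3 * 1024 * 1024 * 1024))
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue  # zoned: skip
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue                 # has a partition table: in use
    size=$(( $(<"$block/size") * 512 ))       # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && echo "candidate: $dev"
done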
00:03:46.166 18:18:31 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:46.166 18:18:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:46.166 ************************************
00:03:46.166 START TEST nvme_mount
00:03:46.166 ************************************
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:46.166 18:18:31 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:47.103 Creating new GPT entries in memory.
00:03:47.103 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:47.103 other utilities.
00:03:47.103 18:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:47.103 18:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:47.103 18:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:47.103 18:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:47.103 18:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:48.040 Creating new GPT entries in memory.
00:03:48.040 The operation has completed successfully.
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3716470
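The partitioning pattern above stands on its own: drop the old label, then carve 1 GiB partitions (2097152 sectors each, starting at sector 2048) while holding an exclusive flock on the disk node so nothing else edits the table mid-sequence. A sketch, with partprobe standing in for the suite's sync_dev_uevents.sh udev wait:

disk=/dev/nvme0n1
size=$((1073741824 / 512))            # 1 GiB = 2097152 sectors
start=2048
sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR first
for part in 1 2; do                   # this pass cut one partition; dm_mount below cuts two
    end=$(( start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$start:$end
    start=$(( end + 1 ))
done
partprobe "$disk"                     # let the kernel pick up the new partition nodes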
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.040 18:18:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
[xtrace elided: verify() records dev=0000:5e:00.0, mounts=nvme0n1:nvme0n1p1 and the mount-point/dummy-file paths, touches the dummy file (devices.sh@56 -- # :), then runs 'setup output config' with PCI_ALLOWED=0000:5e:00.0]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[xtrace elided: the @62 test then rejects each of the sixteen other devices at 0000:00:04.0-7 and 0000:80:04.0-7 against 0000:5e:00.0]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:51.320 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:51.320 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:51.320 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:51.320 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:51.320 /dev/nvme0n1: calling ioctl to re-read partition table: Success
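The cleanup just logged is the suite's standing teardown contract: unmount if mounted, then let wipefs strip every signature it knows (the ext4 magic at 0x438 on the partition; primary GPT, backup GPT and protective MBR on the whole disk) so the next test sees a blank device. As a guarded sketch:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 superblock magic (53 ef)
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT headers plus protective MBR (55 aa)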
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.320 18:18:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
[xtrace elided: verify() locals, dummy-file touch and 'setup output config' with PCI_ALLOWED=0000:5e:00.0, as in the previous pass]
00:03:53.852 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:53.852 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:53.852 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[xtrace elided: the sixteen 0000:00:04.x/0000:80:04.x rows are again rejected against 0000:5e:00.0]
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.111 18:18:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
[xtrace elided: the sixteen 0000:00:04.x/0000:80:04.x rows are rejected one more time]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:57.396 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:57.396
00:03:57.396 real 0m10.989s
00:03:57.396 user 0m3.295s
00:03:57.396 sys 0m5.553s
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.396 18:18:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:57.396 ************************************
00:03:57.396 END TEST nvme_mount
00:03:57.396 ************************************
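That closes nvme_mount: the same make-mount-verify cycle ran twice, once against the partition and once against the bare disk with the filesystem capped at 1024M. The core of one cycle, minus the PCI cross-check the suite layers on top:

dev=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mkdir -p "$mnt"
mkfs.ext4 -qF "$dev" 1024M             # -q quiet, -F force; 1 GiB cap as in the trace
mount "$dev" "$mnt"
touch "$mnt/test_nvme"                 # dummy file verify() expects to find on the mount
mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo verified
rm "$mnt/test_nvme" && umount "$mnt"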
00:03:57.396 18:18:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:03:57.396 18:18:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:57.396 18:18:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.396 18:18:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.396 18:18:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:57.396 ************************************
00:03:57.396 START TEST dm_mount
00:03:57.396 ************************************
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
[xtrace elided: the @46/@47 loop appends nvme0n1p1 and nvme0n1p2 to parts]
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:57.396 18:18:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:57.962 Creating new GPT entries in memory.
00:03:57.962 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:57.962 other utilities.
00:03:57.962 18:18:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:57.962 18:18:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:57.962 18:18:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:57.962 18:18:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:57.962 18:18:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:59.336 Creating new GPT entries in memory.
00:03:59.336 The operation has completed successfully.
00:03:59.336 18:18:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:59.336 18:18:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:59.336 18:18:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:59.336 18:18:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:59.336 18:18:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:00.271 The operation has completed successfully.
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3720608
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
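Only 'dmsetup create nvme_dm_test' appears in the trace; the mapping table itself arrives on dmsetup's stdin and is not echoed. A plausible table for this layout, concatenating the two freshly cut 1 GiB partitions into one linear device (sector counts assumed from the sgdisk calls above):

dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # the mapper name is a symlink to /dev/dm-N
dm=${dm##*/}                                 # here: dm-2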
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:00.271 18:18:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
[xtrace elided: verify() locals, dummy-file touch and 'setup output config' with PCI_ALLOWED=0000:5e:00.0, as in the nvme_mount passes]
00:04:02.806 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:02.806 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:02.806 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
[xtrace elided: the sixteen 0000:00:04.x/0000:80:04.x rows are rejected against 0000:5e:00.0]
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:03.064 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
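Now that the mount is gone, the only remaining claim on the partitions is the device-mapper node itself, and that is exactly what the holder@ checks assert: sysfs lists every holder of a block device under /sys/class/block/<dev>/holders. A sketch of the same assertion:

for part in nvme0n1p1 nvme0n1p2; do
    [[ -e /sys/class/block/$part/holders/dm-2 ]] &&
        echo "$part is still held by dm-2"     # present for as long as the dm table exists
done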
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.065 18:18:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.354 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.355 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.355 00:04:06.355 real 0m8.910s 00:04:06.355 user 0m2.239s 00:04:06.355 sys 0m3.723s 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.355 18:18:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.355 ************************************ 00:04:06.355 END TEST dm_mount 00:04:06.355 ************************************ 00:04:06.355 18:18:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:06.355 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:06.355 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:04:06.355 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:06.355 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:06.355 18:18:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:06.355
00:04:06.355 real 0m23.612s
00:04:06.355 user 0m6.817s
00:04:06.355 sys 0m11.586s
00:04:06.355 18:18:51 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.355 18:18:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:06.355 ************************************
00:04:06.355 END TEST devices
00:04:06.355 ************************************
00:04:06.355 18:18:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:06.355
00:04:06.355 real 1m21.022s
00:04:06.355 user 0m26.472s
00:04:06.355 sys 0m43.715s
00:04:06.355 18:18:51 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.355 18:18:51 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:06.355 ************************************
00:04:06.355 END TEST setup.sh
00:04:06.355 ************************************
00:04:06.355 18:18:51 -- common/autotest_common.sh@1142 -- # return 0
00:04:06.355 18:18:51 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:09.643 Hugepages
00:04:09.643 node hugesize free / total
00:04:09.643 node0 1048576kB 0 / 0
00:04:09.643 node0 2048kB 2048 / 2048
00:04:09.643 node1 1048576kB 0 / 0
00:04:09.643 node1 2048kB 0 / 0
00:04:09.643
00:04:09.643 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:09.643 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:09.643 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:09.643 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:09.643 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:09.643 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:09.643 18:18:54 -- spdk/autotest.sh@130 -- # uname -s
00:04:09.643 18:18:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:09.643 18:18:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:09.643 18:18:54 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:12.175 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:12.175 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:13.548 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:13.806 18:18:59 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:14.746 18:19:00 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:14.746 18:19:00 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:14.746 18:19:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:14.746 18:19:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:14.746 18:19:00 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:14.746 18:19:00 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:14.746 18:19:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:14.746 18:19:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:14.746 18:19:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:14.746 18:19:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:14.746 18:19:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0
00:04:14.746 18:19:00 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:18.031 Waiting for block devices as requested
00:04:18.031 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:04:18.031 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:18.031 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:18.289 0000:00:04.0 (8086 2021):
vfio-pci -> ioatdma 00:04:18.289 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.289 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.547 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.547 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.547 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.547 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:18.806 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:18.806 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:18.806 18:19:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:18.806 18:19:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:18.806 18:19:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:18.806 18:19:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:18.806 18:19:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:18.806 18:19:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:18.806 18:19:04 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:18.806 18:19:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:18.806 18:19:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:18.806 18:19:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:18.806 18:19:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:18.806 18:19:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:18.806 18:19:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:18.806 18:19:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:18.806 18:19:04 -- common/autotest_common.sh@1557 -- # continue 00:04:18.806 18:19:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:18.806 18:19:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.806 18:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:19.066 18:19:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:19.066 18:19:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.066 18:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:19.066 18:19:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.672 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.672 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:21.672 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.309 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.309 18:19:08 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:23.309 18:19:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.309 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:04:23.309 18:19:08 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:23.309 18:19:08 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:23.309 18:19:08 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:23.309 18:19:08 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:23.309 18:19:08 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:23.309 18:19:08 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:23.309 18:19:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:23.309 18:19:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:23.309 18:19:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.309 18:19:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.309 18:19:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:23.568 18:19:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:23.568 18:19:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:23.568 18:19:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:23.568 18:19:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:23.568 18:19:08 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:23.568 18:19:08 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:23.568 18:19:08 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:23.568 18:19:08 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:23.568 18:19:08 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:23.568 18:19:08 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3729484 00:04:23.568 18:19:08 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.568 18:19:08 -- common/autotest_common.sh@1598 -- # waitforlisten 3729484 00:04:23.568 18:19:08 -- common/autotest_common.sh@829 -- # '[' -z 3729484 ']' 00:04:23.568 18:19:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.568 18:19:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.568 18:19:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.568 18:19:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.568 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:04:23.568 [2024-07-15 18:19:08.984136] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:04:23.568 [2024-07-15 18:19:08.984181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729484 ]
00:04:23.568 EAL: No free 2048 kB hugepages reported on node 1
00:04:23.568 [2024-07-15 18:19:09.051374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:23.826 [2024-07-15 18:19:09.130763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.393 18:19:09 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:24.393 18:19:09 -- common/autotest_common.sh@862 -- # return 0
00:04:24.393 18:19:09 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:04:24.393 18:19:09 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:04:24.393 18:19:09 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:27.675 nvme0n1
00:04:27.675 18:19:12 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:27.675 [2024-07-15 18:19:12.918818] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:27.675 request:
00:04:27.675 {
00:04:27.676 "nvme_ctrlr_name": "nvme0",
00:04:27.676 "password": "test",
00:04:27.676 "method": "bdev_nvme_opal_revert",
00:04:27.676 "req_id": 1
00:04:27.676 }
00:04:27.676 Got JSON-RPC error response
00:04:27.676 response:
00:04:27.676 {
00:04:27.676 "code": -32602,
00:04:27.676 "message": "Invalid parameters"
00:04:27.676 }
00:04:27.676 18:19:12 -- common/autotest_common.sh@1604 -- # true
00:04:27.676 18:19:12 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:04:27.676 18:19:12 -- common/autotest_common.sh@1608 -- # killprocess 3729484
00:04:27.676 18:19:12 -- common/autotest_common.sh@948 -- # '[' -z 3729484 ']'
00:04:27.676 18:19:12 -- common/autotest_common.sh@952 -- # kill -0 3729484
00:04:27.676 18:19:12 -- common/autotest_common.sh@953 -- # uname
00:04:27.676 18:19:12 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:27.676 18:19:12 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3729484
00:04:27.676 18:19:12 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:27.676 18:19:12 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:27.676 18:19:12 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3729484'
killing process with pid 3729484
00:04:27.676 18:19:12 -- common/autotest_common.sh@967 -- # kill 3729484
00:04:27.676 18:19:12 -- common/autotest_common.sh@972 -- # wait 3729484
00:04:29.572 18:19:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:29.572 18:19:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:29.572 18:19:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:29.572 18:19:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:29.572 18:19:15 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:29.572 18:19:15 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:29.572 18:19:15 -- common/autotest_common.sh@10 -- # set +x
00:04:29.572 18:19:15 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:04:29.572 18:19:15 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
18:19:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1
']' 00:04:29.572 18:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.572 18:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:29.572 ************************************ 00:04:29.572 START TEST env 00:04:29.572 ************************************ 00:04:29.572 18:19:15 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.830 * Looking for test storage... 00:04:29.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:29.830 18:19:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.830 18:19:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.830 18:19:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.830 18:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.830 ************************************ 00:04:29.830 START TEST env_memory 00:04:29.830 ************************************ 00:04:29.830 18:19:15 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.830 00:04:29.830 00:04:29.830 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.830 http://cunit.sourceforge.net/ 00:04:29.830 00:04:29.830 00:04:29.830 Suite: memory 00:04:29.830 Test: alloc and free memory map ...[2024-07-15 18:19:15.293434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.830 passed 00:04:29.830 Test: mem map translation ...[2024-07-15 18:19:15.311176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.830 [2024-07-15 18:19:15.311193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.830 [2024-07-15 18:19:15.311226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.830 [2024-07-15 18:19:15.311234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.830 passed 00:04:29.830 Test: mem map registration ...[2024-07-15 18:19:15.347328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:29.830 [2024-07-15 18:19:15.347349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:29.830 passed 00:04:30.089 Test: mem map adjacent registrations ...passed 00:04:30.089 00:04:30.089 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.089 suites 1 1 n/a 0 0 00:04:30.089 tests 4 4 4 0 0 00:04:30.089 asserts 152 152 152 0 n/a 00:04:30.089 00:04:30.089 Elapsed time = 0.134 seconds 00:04:30.089 00:04:30.089 real 0m0.147s 00:04:30.089 user 0m0.136s 00:04:30.089 sys 0m0.010s 00:04:30.089 18:19:15 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.089 18:19:15 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:30.089 ************************************ 00:04:30.089 END TEST env_memory 00:04:30.089 ************************************ 00:04:30.089 18:19:15 env -- common/autotest_common.sh@1142 -- # return 0 00:04:30.089 18:19:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.089 18:19:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.089 18:19:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.089 18:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.089 ************************************ 00:04:30.089 START TEST env_vtophys 00:04:30.089 ************************************ 00:04:30.089 18:19:15 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.089 EAL: lib.eal log level changed from notice to debug 00:04:30.089 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.089 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.089 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.089 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.089 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.089 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.089 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.089 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.089 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.089 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.089 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.089 EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.089 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.089 EAL: Detected lcore 13 as core 16 on socket 0 00:04:30.089 EAL: Detected lcore 14 as core 17 on socket 0 00:04:30.089 EAL: Detected lcore 15 as core 18 on socket 0 00:04:30.089 EAL: Detected lcore 16 as core 19 on socket 0 00:04:30.089 EAL: Detected lcore 17 as core 20 on socket 0 00:04:30.089 EAL: Detected lcore 18 as core 21 on socket 0 00:04:30.089 EAL: Detected lcore 19 as core 25 on socket 0 00:04:30.089 EAL: Detected lcore 20 as core 26 on socket 0 00:04:30.089 EAL: Detected lcore 21 as core 27 on socket 0 00:04:30.089 EAL: Detected lcore 22 as core 28 on socket 0 00:04:30.089 EAL: Detected lcore 23 as core 29 on socket 0 00:04:30.089 EAL: Detected lcore 24 as core 0 on socket 1 00:04:30.089 EAL: Detected lcore 25 as core 1 on socket 1 00:04:30.089 EAL: Detected lcore 26 as core 2 on socket 1 00:04:30.089 EAL: Detected lcore 27 as core 3 on socket 1 00:04:30.089 EAL: Detected lcore 28 as core 4 on socket 1 00:04:30.089 EAL: Detected lcore 29 as core 5 on socket 1 00:04:30.089 EAL: Detected lcore 30 as core 6 on socket 1 00:04:30.089 EAL: Detected lcore 31 as core 8 on socket 1 00:04:30.089 EAL: Detected lcore 32 as core 10 on socket 1 00:04:30.089 EAL: Detected lcore 33 as core 11 on socket 1 00:04:30.089 EAL: Detected lcore 34 as core 12 on socket 1 00:04:30.089 EAL: Detected lcore 35 as core 13 on socket 1 00:04:30.089 EAL: Detected lcore 36 as core 16 on socket 1 00:04:30.089 EAL: Detected lcore 37 as core 17 on socket 1 00:04:30.089 EAL: Detected lcore 38 as core 18 on socket 1 00:04:30.089 EAL: Detected lcore 39 as core 19 on socket 1 00:04:30.089 EAL: Detected lcore 40 as core 20 on socket 1 00:04:30.089 EAL: Detected lcore 41 as core 21 on socket 1 00:04:30.089 EAL: Detected lcore 42 as core 24 on socket 1 00:04:30.089 EAL: Detected lcore 43 as core 25 on socket 1 00:04:30.089 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:30.089 EAL: Detected lcore 45 as core 27 on socket 1 00:04:30.089 EAL: Detected lcore 46 as core 28 on socket 1 00:04:30.089 EAL: Detected lcore 47 as core 29 on socket 1 00:04:30.089 EAL: Detected lcore 48 as core 0 on socket 0 00:04:30.089 EAL: Detected lcore 49 as core 1 on socket 0 00:04:30.089 EAL: Detected lcore 50 as core 2 on socket 0 00:04:30.089 EAL: Detected lcore 51 as core 3 on socket 0 00:04:30.089 EAL: Detected lcore 52 as core 4 on socket 0 00:04:30.089 EAL: Detected lcore 53 as core 5 on socket 0 00:04:30.089 EAL: Detected lcore 54 as core 6 on socket 0 00:04:30.089 EAL: Detected lcore 55 as core 8 on socket 0 00:04:30.089 EAL: Detected lcore 56 as core 9 on socket 0 00:04:30.089 EAL: Detected lcore 57 as core 10 on socket 0 00:04:30.089 EAL: Detected lcore 58 as core 11 on socket 0 00:04:30.089 EAL: Detected lcore 59 as core 12 on socket 0 00:04:30.089 EAL: Detected lcore 60 as core 13 on socket 0 00:04:30.089 EAL: Detected lcore 61 as core 16 on socket 0 00:04:30.089 EAL: Detected lcore 62 as core 17 on socket 0 00:04:30.089 EAL: Detected lcore 63 as core 18 on socket 0 00:04:30.089 EAL: Detected lcore 64 as core 19 on socket 0 00:04:30.089 EAL: Detected lcore 65 as core 20 on socket 0 00:04:30.089 EAL: Detected lcore 66 as core 21 on socket 0 00:04:30.089 EAL: Detected lcore 67 as core 25 on socket 0 00:04:30.089 EAL: Detected lcore 68 as core 26 on socket 0 00:04:30.089 EAL: Detected lcore 69 as core 27 on socket 0 00:04:30.089 EAL: Detected lcore 70 as core 28 on socket 0 00:04:30.089 EAL: Detected lcore 71 as core 29 on socket 0 00:04:30.089 EAL: Detected lcore 72 as core 0 on socket 1 00:04:30.089 EAL: Detected lcore 73 as core 1 on socket 1 00:04:30.089 EAL: Detected lcore 74 as core 2 on socket 1 00:04:30.089 EAL: Detected lcore 75 as core 3 on socket 1 00:04:30.089 EAL: Detected lcore 76 as core 4 on socket 1 00:04:30.089 EAL: Detected lcore 77 as core 5 on socket 1 00:04:30.089 EAL: Detected lcore 78 as core 6 on socket 1 00:04:30.089 EAL: Detected lcore 79 as core 8 on socket 1 00:04:30.089 EAL: Detected lcore 80 as core 10 on socket 1 00:04:30.089 EAL: Detected lcore 81 as core 11 on socket 1 00:04:30.089 EAL: Detected lcore 82 as core 12 on socket 1 00:04:30.089 EAL: Detected lcore 83 as core 13 on socket 1 00:04:30.089 EAL: Detected lcore 84 as core 16 on socket 1 00:04:30.089 EAL: Detected lcore 85 as core 17 on socket 1 00:04:30.089 EAL: Detected lcore 86 as core 18 on socket 1 00:04:30.089 EAL: Detected lcore 87 as core 19 on socket 1 00:04:30.089 EAL: Detected lcore 88 as core 20 on socket 1 00:04:30.089 EAL: Detected lcore 89 as core 21 on socket 1 00:04:30.089 EAL: Detected lcore 90 as core 24 on socket 1 00:04:30.089 EAL: Detected lcore 91 as core 25 on socket 1 00:04:30.089 EAL: Detected lcore 92 as core 26 on socket 1 00:04:30.089 EAL: Detected lcore 93 as core 27 on socket 1 00:04:30.089 EAL: Detected lcore 94 as core 28 on socket 1 00:04:30.089 EAL: Detected lcore 95 as core 29 on socket 1 00:04:30.089 EAL: Maximum logical cores by configuration: 128 00:04:30.089 EAL: Detected CPU lcores: 96 00:04:30.089 EAL: Detected NUMA nodes: 2 00:04:30.089 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.089 EAL: Detected shared linkage of DPDK 00:04:30.089 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.089 EAL: Bus pci wants IOVA as 'DC' 00:04:30.089 EAL: Buses did not request a specific IOVA mode. 00:04:30.089 EAL: IOMMU is available, selecting IOVA as VA mode. 
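The lcore map EAL prints above is read from the kernel's CPU topology. As a cross-check only, here is a minimal sketch that derives the same mapping from standard Linux sysfs paths; nothing below is SPDK-specific or part of this run:

# Print the same "lcore N as core C on socket S" mapping EAL detected, straight from sysfs.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  lcore=${cpu##*cpu}
  core=$(cat "$cpu/topology/core_id")
  socket=$(cat "$cpu/topology/physical_package_id")
  echo "lcore $lcore as core $core on socket $socket"
done
# A populated /sys/kernel/iommu_groups is roughly what "IOMMU is available" reflects here.
ls /sys/kernel/iommu_groups | head -3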
00:04:30.089 EAL: Selected IOVA mode 'VA' 00:04:30.089 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.089 EAL: Probing VFIO support... 00:04:30.089 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.089 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.089 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.089 EAL: VFIO support initialized 00:04:30.089 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.089 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.089 EAL: Setting up physically contiguous memory... 00:04:30.089 EAL: Setting maximum number of open files to 524288 00:04:30.089 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.089 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.089 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.089 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.089 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.089 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.089 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.089 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.089 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.089 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:30.089 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.089 EAL: Hugepages will be freed exactly as allocated. 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: TSC frequency is ~2100000 KHz 00:04:30.089 EAL: Main lcore 0 is ready (tid=7f6e81305a00;cpuset=[0]) 00:04:30.089 EAL: Trying to obtain current memory policy. 00:04:30.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.089 EAL: Restoring previous memory policy: 0 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.089 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.089 00:04:30.089 00:04:30.089 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.089 http://cunit.sourceforge.net/ 00:04:30.089 00:04:30.089 00:04:30.089 Suite: components_suite 00:04:30.089 Test: vtophys_malloc_test ...passed 00:04:30.089 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.089 EAL: Restoring previous memory policy: 4 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.089 EAL: Trying to obtain current memory policy. 00:04:30.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.089 EAL: Restoring previous memory policy: 4 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.089 EAL: Trying to obtain current memory policy. 
00:04:30.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.089 EAL: Restoring previous memory policy: 4 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.089 EAL: request: mp_malloc_sync 00:04:30.089 EAL: No shared files mode enabled, IPC is disabled 00:04:30.089 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.089 EAL: Trying to obtain current memory policy. 00:04:30.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.090 EAL: Restoring previous memory policy: 4 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.090 EAL: Trying to obtain current memory policy. 00:04:30.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.090 EAL: Restoring previous memory policy: 4 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.090 EAL: Trying to obtain current memory policy. 00:04:30.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.090 EAL: Restoring previous memory policy: 4 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.090 EAL: Trying to obtain current memory policy. 00:04:30.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.090 EAL: Restoring previous memory policy: 4 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.090 EAL: request: mp_malloc_sync 00:04:30.090 EAL: No shared files mode enabled, IPC is disabled 00:04:30.090 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.090 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.348 EAL: request: mp_malloc_sync 00:04:30.348 EAL: No shared files mode enabled, IPC is disabled 00:04:30.348 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.348 EAL: Trying to obtain current memory policy. 
00:04:30.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.348 EAL: Restoring previous memory policy: 4 00:04:30.348 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.348 EAL: request: mp_malloc_sync 00:04:30.348 EAL: No shared files mode enabled, IPC is disabled 00:04:30.348 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.348 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.348 EAL: request: mp_malloc_sync 00:04:30.348 EAL: No shared files mode enabled, IPC is disabled 00:04:30.348 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.348 EAL: Trying to obtain current memory policy. 00:04:30.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.348 EAL: Restoring previous memory policy: 4 00:04:30.348 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.348 EAL: request: mp_malloc_sync 00:04:30.348 EAL: No shared files mode enabled, IPC is disabled 00:04:30.348 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.607 EAL: request: mp_malloc_sync 00:04:30.607 EAL: No shared files mode enabled, IPC is disabled 00:04:30.607 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.607 EAL: Trying to obtain current memory policy. 00:04:30.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.864 EAL: Restoring previous memory policy: 4 00:04:30.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.864 EAL: request: mp_malloc_sync 00:04:30.864 EAL: No shared files mode enabled, IPC is disabled 00:04:30.864 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.123 EAL: request: mp_malloc_sync 00:04:31.123 EAL: No shared files mode enabled, IPC is disabled 00:04:31.123 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.123 passed 00:04:31.123 00:04:31.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.123 suites 1 1 n/a 0 0 00:04:31.123 tests 2 2 2 0 0 00:04:31.123 asserts 497 497 497 0 n/a 00:04:31.123 00:04:31.123 Elapsed time = 0.964 seconds 00:04:31.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.123 EAL: request: mp_malloc_sync 00:04:31.123 EAL: No shared files mode enabled, IPC is disabled 00:04:31.123 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.123 EAL: No shared files mode enabled, IPC is disabled 00:04:31.123 EAL: No shared files mode enabled, IPC is disabled 00:04:31.123 EAL: No shared files mode enabled, IPC is disabled 00:04:31.123 00:04:31.123 real 0m1.090s 00:04:31.123 user 0m0.642s 00:04:31.123 sys 0m0.418s 00:04:31.123 18:19:16 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.123 18:19:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.123 ************************************ 00:04:31.123 END TEST env_vtophys 00:04:31.123 ************************************ 00:04:31.123 18:19:16 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.123 18:19:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.123 18:19:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.123 18:19:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.123 18:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.123 ************************************ 00:04:31.123 START TEST env_pci 00:04:31.123 ************************************ 00:04:31.123 18:19:16 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.123 00:04:31.123 00:04:31.123 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.123 http://cunit.sourceforge.net/ 00:04:31.123 00:04:31.123 00:04:31.123 Suite: pci 00:04:31.123 Test: pci_hook ...[2024-07-15 18:19:16.642863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3730816 has claimed it 00:04:31.123 EAL: Cannot find device (10000:00:01.0) 00:04:31.123 EAL: Failed to attach device on primary process 00:04:31.123 passed 00:04:31.123 00:04:31.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.123 suites 1 1 n/a 0 0 00:04:31.123 tests 1 1 1 0 0 00:04:31.123 asserts 25 25 25 0 n/a 00:04:31.123 00:04:31.123 Elapsed time = 0.029 seconds 00:04:31.123 00:04:31.123 real 0m0.049s 00:04:31.123 user 0m0.015s 00:04:31.123 sys 0m0.034s 00:04:31.123 18:19:16 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.123 18:19:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.123 ************************************ 00:04:31.123 END TEST env_pci 00:04:31.123 ************************************ 00:04:31.381 18:19:16 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.381 18:19:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.381 18:19:16 env -- env/env.sh@15 -- # uname 00:04:31.381 18:19:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.381 18:19:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.381 18:19:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.381 18:19:16 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:31.381 18:19:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.381 18:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.381 ************************************ 00:04:31.381 START TEST env_dpdk_post_init 00:04:31.381 ************************************ 00:04:31.381 18:19:16 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.381 EAL: Detected CPU lcores: 96 00:04:31.381 EAL: Detected NUMA nodes: 2 00:04:31.381 EAL: Detected shared linkage of DPDK 00:04:31.381 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.381 EAL: Selected IOVA mode 'VA' 00:04:31.381 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.381 EAL: VFIO support initialized 00:04:31.381 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.381 EAL: Using IOMMU type 1 (Type 1) 00:04:31.381 EAL: Ignore mapping IO port bar(1) 00:04:31.381 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:31.381 EAL: Ignore mapping IO port bar(1) 00:04:31.381 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:31.381 EAL: Ignore mapping IO port bar(1) 00:04:31.381 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:31.381 EAL: Ignore mapping IO port bar(1) 00:04:31.381 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:31.381 EAL: Ignore mapping IO port bar(1) 00:04:31.381 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:31.638 EAL: Ignore mapping IO port bar(1) 00:04:31.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:31.639 EAL: Ignore mapping IO port bar(1) 00:04:31.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:31.639 EAL: Ignore mapping IO port bar(1) 00:04:31.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.203 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:32.203 EAL: Ignore mapping IO port bar(1) 00:04:32.203 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.203 EAL: Ignore mapping IO port bar(1) 00:04:32.203 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.203 EAL: Ignore mapping IO port bar(1) 00:04:32.203 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.203 EAL: Ignore mapping IO port bar(1) 00:04:32.203 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.460 EAL: Ignore mapping IO port bar(1) 00:04:32.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.460 EAL: Ignore mapping IO port bar(1) 00:04:32.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.460 EAL: Ignore mapping IO port bar(1) 00:04:32.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.460 EAL: Ignore mapping IO port bar(1) 00:04:32.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:35.740 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:35.740 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:36.306 Starting DPDK initialization... 00:04:36.306 Starting SPDK post initialization... 00:04:36.306 SPDK NVMe probe 00:04:36.306 Attaching to 0000:5e:00.0 00:04:36.306 Attached to 0000:5e:00.0 00:04:36.306 Cleaning up... 
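The probe above only attaches 0000:5e:00.0 because setup.sh has already rebound it from nvme to vfio-pci. A small sketch for inspecting that binding by hand, using the BDF from this run and standard sysfs paths (a cross-check, not part of the test):

bdf=0000:5e:00.0
# Driver currently bound: vfio-pci while SPDK owns the device, nvme again after setup.sh reset.
basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"
# IOMMU group that VFIO uses to map the device for userspace DMA.
basename "$(readlink -f /sys/bus/pci/devices/$bdf/iommu_group)"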
00:04:36.306
00:04:36.306 real 0m4.836s
00:04:36.306 user 0m3.745s
00:04:36.306 sys 0m0.157s
00:04:36.306 18:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.306 18:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:36.306 ************************************
00:04:36.306 END TEST env_dpdk_post_init
00:04:36.306 ************************************
00:04:36.306 18:19:21 env -- common/autotest_common.sh@1142 -- # return 0
00:04:36.306 18:19:21 env -- env/env.sh@26 -- # uname
00:04:36.306 18:19:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:36.306 18:19:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:36.306 18:19:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:36.306 18:19:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.306 18:19:21 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.306 ************************************
00:04:36.306 START TEST env_mem_callbacks
00:04:36.306 ************************************
00:04:36.306 18:19:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:36.306 EAL: Detected CPU lcores: 96
00:04:36.306 EAL: Detected NUMA nodes: 2
00:04:36.306 EAL: Detected shared linkage of DPDK
00:04:36.306 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:36.306 EAL: Selected IOVA mode 'VA'
00:04:36.306 EAL: No free 2048 kB hugepages reported on node 1
00:04:36.306 EAL: VFIO support initialized
00:04:36.306 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:36.306
00:04:36.306
00:04:36.306 CUnit - A unit testing framework for C - Version 2.1-3
00:04:36.306 http://cunit.sourceforge.net/
00:04:36.306
00:04:36.306
00:04:36.306 Suite: memory
00:04:36.306 Test: test ...
00:04:36.306 register 0x200000200000 2097152
00:04:36.306 malloc 3145728
00:04:36.306 register 0x200000400000 4194304
00:04:36.306 buf 0x200000500000 len 3145728 PASSED
00:04:36.306 malloc 64
00:04:36.306 buf 0x2000004fff40 len 64 PASSED
00:04:36.306 malloc 4194304
00:04:36.306 register 0x200000800000 6291456
00:04:36.306 buf 0x200000a00000 len 4194304 PASSED
00:04:36.306 free 0x200000500000 3145728
00:04:36.306 free 0x2000004fff40 64
00:04:36.306 unregister 0x200000400000 4194304 PASSED
00:04:36.306 free 0x200000a00000 4194304
00:04:36.307 unregister 0x200000800000 6291456 PASSED
00:04:36.307 malloc 8388608
00:04:36.307 register 0x200000400000 10485760
00:04:36.307 buf 0x200000600000 len 8388608 PASSED
00:04:36.307 free 0x200000600000 8388608
00:04:36.307 unregister 0x200000400000 10485760 PASSED
00:04:36.307 passed
00:04:36.307
00:04:36.307 Run Summary: Type Total Ran Passed Failed Inactive
00:04:36.307 suites 1 1 n/a 0 0
00:04:36.307 tests 1 1 1 0 0
00:04:36.307 asserts 15 15 15 0 n/a
00:04:36.307
00:04:36.307 Elapsed time = 0.008 seconds
00:04:36.307
00:04:36.307 real 0m0.057s
00:04:36.307 user 0m0.019s
00:04:36.307 sys 0m0.037s
00:04:36.307 18:19:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.307 18:19:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:36.307 ************************************
00:04:36.307 END TEST env_mem_callbacks
00:04:36.307 ************************************
00:04:36.307 18:19:21 env -- common/autotest_common.sh@1142 -- # return 0
00:04:36.307
00:04:36.307 real 0m6.622s
00:04:36.307 user 0m4.730s
00:04:36.307 sys 0m0.958s
00:04:36.307 18:19:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:36.307 18:19:21 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.307 ************************************
00:04:36.307 END TEST env
00:04:36.307 ************************************
00:04:36.307 18:19:21 -- common/autotest_common.sh@1142 -- # return 0
00:04:36.307 18:19:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:36.307 18:19:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:36.307 18:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.307 18:19:21 -- common/autotest_common.sh@10 -- # set +x
00:04:36.307 ************************************
00:04:36.307 START TEST rpc
00:04:36.307 ************************************
00:04:36.307 18:19:21 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:36.565 * Looking for test storage...
00:04:36.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:36.565 18:19:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3731852
00:04:36.565 18:19:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:36.565 18:19:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:36.565 18:19:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3731852
00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 3731852 ']'
00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:36.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.565 18:19:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.565 [2024-07-15 18:19:21.961585] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:36.565 [2024-07-15 18:19:21.961628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731852 ] 00:04:36.565 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.565 [2024-07-15 18:19:22.027956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.565 [2024-07-15 18:19:22.106685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.565 [2024-07-15 18:19:22.106720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3731852' to capture a snapshot of events at runtime. 00:04:36.565 [2024-07-15 18:19:22.106726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.565 [2024-07-15 18:19:22.106732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.565 [2024-07-15 18:19:22.106737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3731852 for offline analysis/debug. 00:04:36.565 [2024-07-15 18:19:22.106755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.500 18:19:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.500 18:19:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.500 18:19:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.500 18:19:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.500 18:19:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.500 18:19:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.500 18:19:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.500 18:19:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.500 18:19:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.500 ************************************ 00:04:37.500 START TEST rpc_integrity 00:04:37.500 ************************************ 00:04:37.500 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.500 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.500 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.500 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.500 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.500 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.501 { 00:04:37.501 "name": "Malloc0", 00:04:37.501 "aliases": [ 00:04:37.501 "c32009c0-ee99-4d30-a275-58bf99f845ec" 00:04:37.501 ], 00:04:37.501 "product_name": "Malloc disk", 00:04:37.501 "block_size": 512, 00:04:37.501 "num_blocks": 16384, 00:04:37.501 "uuid": "c32009c0-ee99-4d30-a275-58bf99f845ec", 00:04:37.501 "assigned_rate_limits": { 00:04:37.501 "rw_ios_per_sec": 0, 00:04:37.501 "rw_mbytes_per_sec": 0, 00:04:37.501 "r_mbytes_per_sec": 0, 00:04:37.501 "w_mbytes_per_sec": 0 00:04:37.501 }, 00:04:37.501 "claimed": false, 00:04:37.501 "zoned": false, 00:04:37.501 "supported_io_types": { 00:04:37.501 "read": true, 00:04:37.501 "write": true, 00:04:37.501 "unmap": true, 00:04:37.501 "flush": true, 00:04:37.501 "reset": true, 00:04:37.501 "nvme_admin": false, 00:04:37.501 "nvme_io": false, 00:04:37.501 "nvme_io_md": false, 00:04:37.501 "write_zeroes": true, 00:04:37.501 "zcopy": true, 00:04:37.501 "get_zone_info": false, 00:04:37.501 "zone_management": false, 00:04:37.501 "zone_append": false, 00:04:37.501 "compare": false, 00:04:37.501 "compare_and_write": false, 00:04:37.501 "abort": true, 00:04:37.501 "seek_hole": false, 00:04:37.501 "seek_data": false, 00:04:37.501 "copy": true, 00:04:37.501 "nvme_iov_md": false 00:04:37.501 }, 00:04:37.501 "memory_domains": [ 00:04:37.501 { 00:04:37.501 "dma_device_id": "system", 00:04:37.501 "dma_device_type": 1 00:04:37.501 }, 00:04:37.501 { 00:04:37.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.501 "dma_device_type": 2 00:04:37.501 } 00:04:37.501 ], 00:04:37.501 "driver_specific": {} 00:04:37.501 } 00:04:37.501 ]' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 [2024-07-15 18:19:22.916781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.501 [2024-07-15 18:19:22.916808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.501 [2024-07-15 18:19:22.916820] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23182d0 00:04:37.501 [2024-07-15 18:19:22.916826] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.501 
[2024-07-15 18:19:22.917865] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.501 [2024-07-15 18:19:22.917887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.501 Passthru0 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.501 { 00:04:37.501 "name": "Malloc0", 00:04:37.501 "aliases": [ 00:04:37.501 "c32009c0-ee99-4d30-a275-58bf99f845ec" 00:04:37.501 ], 00:04:37.501 "product_name": "Malloc disk", 00:04:37.501 "block_size": 512, 00:04:37.501 "num_blocks": 16384, 00:04:37.501 "uuid": "c32009c0-ee99-4d30-a275-58bf99f845ec", 00:04:37.501 "assigned_rate_limits": { 00:04:37.501 "rw_ios_per_sec": 0, 00:04:37.501 "rw_mbytes_per_sec": 0, 00:04:37.501 "r_mbytes_per_sec": 0, 00:04:37.501 "w_mbytes_per_sec": 0 00:04:37.501 }, 00:04:37.501 "claimed": true, 00:04:37.501 "claim_type": "exclusive_write", 00:04:37.501 "zoned": false, 00:04:37.501 "supported_io_types": { 00:04:37.501 "read": true, 00:04:37.501 "write": true, 00:04:37.501 "unmap": true, 00:04:37.501 "flush": true, 00:04:37.501 "reset": true, 00:04:37.501 "nvme_admin": false, 00:04:37.501 "nvme_io": false, 00:04:37.501 "nvme_io_md": false, 00:04:37.501 "write_zeroes": true, 00:04:37.501 "zcopy": true, 00:04:37.501 "get_zone_info": false, 00:04:37.501 "zone_management": false, 00:04:37.501 "zone_append": false, 00:04:37.501 "compare": false, 00:04:37.501 "compare_and_write": false, 00:04:37.501 "abort": true, 00:04:37.501 "seek_hole": false, 00:04:37.501 "seek_data": false, 00:04:37.501 "copy": true, 00:04:37.501 "nvme_iov_md": false 00:04:37.501 }, 00:04:37.501 "memory_domains": [ 00:04:37.501 { 00:04:37.501 "dma_device_id": "system", 00:04:37.501 "dma_device_type": 1 00:04:37.501 }, 00:04:37.501 { 00:04:37.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.501 "dma_device_type": 2 00:04:37.501 } 00:04:37.501 ], 00:04:37.501 "driver_specific": {} 00:04:37.501 }, 00:04:37.501 { 00:04:37.501 "name": "Passthru0", 00:04:37.501 "aliases": [ 00:04:37.501 "19e250bd-2a06-58a2-b6f5-d7f3de8cd92a" 00:04:37.501 ], 00:04:37.501 "product_name": "passthru", 00:04:37.501 "block_size": 512, 00:04:37.501 "num_blocks": 16384, 00:04:37.501 "uuid": "19e250bd-2a06-58a2-b6f5-d7f3de8cd92a", 00:04:37.501 "assigned_rate_limits": { 00:04:37.501 "rw_ios_per_sec": 0, 00:04:37.501 "rw_mbytes_per_sec": 0, 00:04:37.501 "r_mbytes_per_sec": 0, 00:04:37.501 "w_mbytes_per_sec": 0 00:04:37.501 }, 00:04:37.501 "claimed": false, 00:04:37.501 "zoned": false, 00:04:37.501 "supported_io_types": { 00:04:37.501 "read": true, 00:04:37.501 "write": true, 00:04:37.501 "unmap": true, 00:04:37.501 "flush": true, 00:04:37.501 "reset": true, 00:04:37.501 "nvme_admin": false, 00:04:37.501 "nvme_io": false, 00:04:37.501 "nvme_io_md": false, 00:04:37.501 "write_zeroes": true, 00:04:37.501 "zcopy": true, 00:04:37.501 "get_zone_info": false, 00:04:37.501 "zone_management": false, 00:04:37.501 "zone_append": false, 00:04:37.501 "compare": false, 00:04:37.501 "compare_and_write": false, 00:04:37.501 "abort": true, 00:04:37.501 "seek_hole": false, 
00:04:37.501 "seek_data": false, 00:04:37.501 "copy": true, 00:04:37.501 "nvme_iov_md": false 00:04:37.501 }, 00:04:37.501 "memory_domains": [ 00:04:37.501 { 00:04:37.501 "dma_device_id": "system", 00:04:37.501 "dma_device_type": 1 00:04:37.501 }, 00:04:37.501 { 00:04:37.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.501 "dma_device_type": 2 00:04:37.501 } 00:04:37.501 ], 00:04:37.501 "driver_specific": { 00:04:37.501 "passthru": { 00:04:37.501 "name": "Passthru0", 00:04:37.501 "base_bdev_name": "Malloc0" 00:04:37.501 } 00:04:37.501 } 00:04:37.501 } 00:04:37.501 ]' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.501 18:19:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.501 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.501 18:19:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.501 18:19:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.759 18:19:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.759 00:04:37.759 real 0m0.274s 00:04:37.759 user 0m0.180s 00:04:37.760 sys 0m0.031s 00:04:37.760 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 ************************************ 00:04:37.760 END TEST rpc_integrity 00:04:37.760 ************************************ 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.760 18:19:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 ************************************ 00:04:37.760 START TEST rpc_plugins 00:04:37.760 ************************************ 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.760 { 00:04:37.760 "name": "Malloc1", 00:04:37.760 "aliases": [ 00:04:37.760 "c1eca637-4fe5-477f-ae98-21f7f176b76e" 00:04:37.760 ], 00:04:37.760 "product_name": "Malloc disk", 00:04:37.760 "block_size": 4096, 00:04:37.760 "num_blocks": 256, 00:04:37.760 "uuid": "c1eca637-4fe5-477f-ae98-21f7f176b76e", 00:04:37.760 "assigned_rate_limits": { 00:04:37.760 "rw_ios_per_sec": 0, 00:04:37.760 "rw_mbytes_per_sec": 0, 00:04:37.760 "r_mbytes_per_sec": 0, 00:04:37.760 "w_mbytes_per_sec": 0 00:04:37.760 }, 00:04:37.760 "claimed": false, 00:04:37.760 "zoned": false, 00:04:37.760 "supported_io_types": { 00:04:37.760 "read": true, 00:04:37.760 "write": true, 00:04:37.760 "unmap": true, 00:04:37.760 "flush": true, 00:04:37.760 "reset": true, 00:04:37.760 "nvme_admin": false, 00:04:37.760 "nvme_io": false, 00:04:37.760 "nvme_io_md": false, 00:04:37.760 "write_zeroes": true, 00:04:37.760 "zcopy": true, 00:04:37.760 "get_zone_info": false, 00:04:37.760 "zone_management": false, 00:04:37.760 "zone_append": false, 00:04:37.760 "compare": false, 00:04:37.760 "compare_and_write": false, 00:04:37.760 "abort": true, 00:04:37.760 "seek_hole": false, 00:04:37.760 "seek_data": false, 00:04:37.760 "copy": true, 00:04:37.760 "nvme_iov_md": false 00:04:37.760 }, 00:04:37.760 "memory_domains": [ 00:04:37.760 { 00:04:37.760 "dma_device_id": "system", 00:04:37.760 "dma_device_type": 1 00:04:37.760 }, 00:04:37.760 { 00:04:37.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.760 "dma_device_type": 2 00:04:37.760 } 00:04:37.760 ], 00:04:37.760 "driver_specific": {} 00:04:37.760 } 00:04:37.760 ]' 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.760 18:19:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.760 00:04:37.760 real 0m0.135s 00:04:37.760 user 0m0.086s 00:04:37.760 sys 0m0.017s 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.760 18:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.760 ************************************ 00:04:37.760 END TEST rpc_plugins 00:04:37.760 ************************************ 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.760 18:19:23 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.760 18:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.018 ************************************ 00:04:38.018 START TEST rpc_trace_cmd_test 00:04:38.018 ************************************ 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.018 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3731852", 00:04:38.018 "tpoint_group_mask": "0x8", 00:04:38.018 "iscsi_conn": { 00:04:38.018 "mask": "0x2", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "scsi": { 00:04:38.018 "mask": "0x4", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "bdev": { 00:04:38.018 "mask": "0x8", 00:04:38.018 "tpoint_mask": "0xffffffffffffffff" 00:04:38.018 }, 00:04:38.018 "nvmf_rdma": { 00:04:38.018 "mask": "0x10", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "nvmf_tcp": { 00:04:38.018 "mask": "0x20", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "ftl": { 00:04:38.018 "mask": "0x40", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "blobfs": { 00:04:38.018 "mask": "0x80", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "dsa": { 00:04:38.018 "mask": "0x200", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "thread": { 00:04:38.018 "mask": "0x400", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "nvme_pcie": { 00:04:38.018 "mask": "0x800", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "iaa": { 00:04:38.018 "mask": "0x1000", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "nvme_tcp": { 00:04:38.018 "mask": "0x2000", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "bdev_nvme": { 00:04:38.018 "mask": "0x4000", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 }, 00:04:38.018 "sock": { 00:04:38.018 "mask": "0x8000", 00:04:38.018 "tpoint_mask": "0x0" 00:04:38.018 } 00:04:38.018 }' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
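[Note] The rpc_trace_cmd_test assertions above fetch trace_get_info over RPC and check the tracepoint masks with jq: the target was started with '-e bdev', so the group mask must be "0x8" and the bdev group's tpoint_mask must be fully set. A hand-run sketch of the same checks (default RPC socket assumed):

    scripts/rpc.py trace_get_info > info.json
    jq -r '.tpoint_group_mask' info.json   # expect "0x8" (the bdev group)
    jq -r '.bdev.tpoint_mask' info.json    # expect 0xffffffffffffffff
    # The shared-memory file named in tpoint_shm_path can then be decoded
    # offline, as the target suggested at startup:
    #   build/bin/spdk_trace -s spdk_tgt -p <target pid>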
00:04:38.018 00:04:38.018 real 0m0.225s 00:04:38.018 user 0m0.192s 00:04:38.018 sys 0m0.023s 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.018 18:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.018 ************************************ 00:04:38.018 END TEST rpc_trace_cmd_test 00:04:38.018 ************************************ 00:04:38.276 18:19:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.277 18:19:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.277 18:19:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.277 18:19:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.277 18:19:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.277 18:19:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.277 18:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 ************************************ 00:04:38.277 START TEST rpc_daemon_integrity 00:04:38.277 ************************************ 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.277 { 00:04:38.277 "name": "Malloc2", 00:04:38.277 "aliases": [ 00:04:38.277 "a5629671-11da-488f-992e-11d9137ec13c" 00:04:38.277 ], 00:04:38.277 "product_name": "Malloc disk", 00:04:38.277 "block_size": 512, 00:04:38.277 "num_blocks": 16384, 00:04:38.277 "uuid": "a5629671-11da-488f-992e-11d9137ec13c", 00:04:38.277 "assigned_rate_limits": { 00:04:38.277 "rw_ios_per_sec": 0, 00:04:38.277 "rw_mbytes_per_sec": 0, 00:04:38.277 "r_mbytes_per_sec": 0, 00:04:38.277 "w_mbytes_per_sec": 0 00:04:38.277 }, 00:04:38.277 "claimed": false, 00:04:38.277 "zoned": false, 00:04:38.277 "supported_io_types": { 00:04:38.277 "read": true, 00:04:38.277 "write": true, 00:04:38.277 "unmap": true, 00:04:38.277 "flush": true, 00:04:38.277 "reset": true, 00:04:38.277 "nvme_admin": false, 00:04:38.277 "nvme_io": false, 
00:04:38.277 "nvme_io_md": false, 00:04:38.277 "write_zeroes": true, 00:04:38.277 "zcopy": true, 00:04:38.277 "get_zone_info": false, 00:04:38.277 "zone_management": false, 00:04:38.277 "zone_append": false, 00:04:38.277 "compare": false, 00:04:38.277 "compare_and_write": false, 00:04:38.277 "abort": true, 00:04:38.277 "seek_hole": false, 00:04:38.277 "seek_data": false, 00:04:38.277 "copy": true, 00:04:38.277 "nvme_iov_md": false 00:04:38.277 }, 00:04:38.277 "memory_domains": [ 00:04:38.277 { 00:04:38.277 "dma_device_id": "system", 00:04:38.277 "dma_device_type": 1 00:04:38.277 }, 00:04:38.277 { 00:04:38.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.277 "dma_device_type": 2 00:04:38.277 } 00:04:38.277 ], 00:04:38.277 "driver_specific": {} 00:04:38.277 } 00:04:38.277 ]' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 [2024-07-15 18:19:23.755032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.277 [2024-07-15 18:19:23.755058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.277 [2024-07-15 18:19:23.755070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24afac0 00:04:38.277 [2024-07-15 18:19:23.755076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.277 [2024-07-15 18:19:23.755996] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.277 [2024-07-15 18:19:23.756015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.277 Passthru0 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.277 { 00:04:38.277 "name": "Malloc2", 00:04:38.277 "aliases": [ 00:04:38.277 "a5629671-11da-488f-992e-11d9137ec13c" 00:04:38.277 ], 00:04:38.277 "product_name": "Malloc disk", 00:04:38.277 "block_size": 512, 00:04:38.277 "num_blocks": 16384, 00:04:38.277 "uuid": "a5629671-11da-488f-992e-11d9137ec13c", 00:04:38.277 "assigned_rate_limits": { 00:04:38.277 "rw_ios_per_sec": 0, 00:04:38.277 "rw_mbytes_per_sec": 0, 00:04:38.277 "r_mbytes_per_sec": 0, 00:04:38.277 "w_mbytes_per_sec": 0 00:04:38.277 }, 00:04:38.277 "claimed": true, 00:04:38.277 "claim_type": "exclusive_write", 00:04:38.277 "zoned": false, 00:04:38.277 "supported_io_types": { 00:04:38.277 "read": true, 00:04:38.277 "write": true, 00:04:38.277 "unmap": true, 00:04:38.277 "flush": true, 00:04:38.277 "reset": true, 00:04:38.277 "nvme_admin": false, 00:04:38.277 "nvme_io": false, 00:04:38.277 "nvme_io_md": false, 00:04:38.277 "write_zeroes": true, 00:04:38.277 "zcopy": true, 00:04:38.277 "get_zone_info": 
false, 00:04:38.277 "zone_management": false, 00:04:38.277 "zone_append": false, 00:04:38.277 "compare": false, 00:04:38.277 "compare_and_write": false, 00:04:38.277 "abort": true, 00:04:38.277 "seek_hole": false, 00:04:38.277 "seek_data": false, 00:04:38.277 "copy": true, 00:04:38.277 "nvme_iov_md": false 00:04:38.277 }, 00:04:38.277 "memory_domains": [ 00:04:38.277 { 00:04:38.277 "dma_device_id": "system", 00:04:38.277 "dma_device_type": 1 00:04:38.277 }, 00:04:38.277 { 00:04:38.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.277 "dma_device_type": 2 00:04:38.277 } 00:04:38.277 ], 00:04:38.277 "driver_specific": {} 00:04:38.277 }, 00:04:38.277 { 00:04:38.277 "name": "Passthru0", 00:04:38.277 "aliases": [ 00:04:38.277 "448e90a4-9bcd-5544-afd1-4937fc8c7c86" 00:04:38.277 ], 00:04:38.277 "product_name": "passthru", 00:04:38.277 "block_size": 512, 00:04:38.277 "num_blocks": 16384, 00:04:38.277 "uuid": "448e90a4-9bcd-5544-afd1-4937fc8c7c86", 00:04:38.277 "assigned_rate_limits": { 00:04:38.277 "rw_ios_per_sec": 0, 00:04:38.277 "rw_mbytes_per_sec": 0, 00:04:38.277 "r_mbytes_per_sec": 0, 00:04:38.277 "w_mbytes_per_sec": 0 00:04:38.277 }, 00:04:38.277 "claimed": false, 00:04:38.277 "zoned": false, 00:04:38.277 "supported_io_types": { 00:04:38.277 "read": true, 00:04:38.277 "write": true, 00:04:38.277 "unmap": true, 00:04:38.277 "flush": true, 00:04:38.277 "reset": true, 00:04:38.277 "nvme_admin": false, 00:04:38.277 "nvme_io": false, 00:04:38.277 "nvme_io_md": false, 00:04:38.277 "write_zeroes": true, 00:04:38.277 "zcopy": true, 00:04:38.277 "get_zone_info": false, 00:04:38.277 "zone_management": false, 00:04:38.277 "zone_append": false, 00:04:38.277 "compare": false, 00:04:38.277 "compare_and_write": false, 00:04:38.277 "abort": true, 00:04:38.277 "seek_hole": false, 00:04:38.277 "seek_data": false, 00:04:38.277 "copy": true, 00:04:38.277 "nvme_iov_md": false 00:04:38.277 }, 00:04:38.277 "memory_domains": [ 00:04:38.277 { 00:04:38.277 "dma_device_id": "system", 00:04:38.277 "dma_device_type": 1 00:04:38.277 }, 00:04:38.277 { 00:04:38.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.277 "dma_device_type": 2 00:04:38.277 } 00:04:38.277 ], 00:04:38.277 "driver_specific": { 00:04:38.277 "passthru": { 00:04:38.277 "name": "Passthru0", 00:04:38.277 "base_bdev_name": "Malloc2" 00:04:38.277 } 00:04:38.277 } 00:04:38.277 } 00:04:38.277 ]' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.277 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.536 18:19:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.536 00:04:38.536 real 0m0.265s 00:04:38.536 user 0m0.169s 00:04:38.536 sys 0m0.036s 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.536 18:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 ************************************ 00:04:38.536 END TEST rpc_daemon_integrity 00:04:38.536 ************************************ 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.536 18:19:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.536 18:19:23 rpc -- rpc/rpc.sh@84 -- # killprocess 3731852 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 3731852 ']' 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@952 -- # kill -0 3731852 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3731852 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3731852' 00:04:38.536 killing process with pid 3731852 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@967 -- # kill 3731852 00:04:38.536 18:19:23 rpc -- common/autotest_common.sh@972 -- # wait 3731852 00:04:38.793 00:04:38.793 real 0m2.455s 00:04:38.793 user 0m3.173s 00:04:38.793 sys 0m0.670s 00:04:38.793 18:19:24 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.793 18:19:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.793 ************************************ 00:04:38.793 END TEST rpc 00:04:38.793 ************************************ 00:04:38.793 18:19:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.793 18:19:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.793 18:19:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.793 18:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.793 18:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:38.793 ************************************ 00:04:38.793 START TEST skip_rpc 00:04:38.793 ************************************ 00:04:38.793 18:19:24 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.051 * Looking for test storage... 
00:04:39.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.051 18:19:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.051 18:19:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.051 18:19:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.051 18:19:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.051 18:19:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.051 18:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.051 ************************************ 00:04:39.051 START TEST skip_rpc 00:04:39.051 ************************************ 00:04:39.051 18:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:39.051 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3732481 00:04:39.051 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.051 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.051 18:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.051 [2024-07-15 18:19:24.515113] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:39.051 [2024-07-15 18:19:24.515148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732481 ] 00:04:39.051 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.051 [2024-07-15 18:19:24.581461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.309 [2024-07-15 18:19:24.653814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3732481 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3732481 ']' 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3732481 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3732481 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3732481' 00:04:44.573 killing process with pid 3732481 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3732481 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3732481 00:04:44.573 00:04:44.573 real 0m5.364s 00:04:44.573 user 0m5.124s 00:04:44.573 sys 0m0.265s 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.573 18:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.573 ************************************ 00:04:44.573 END TEST skip_rpc 00:04:44.573 ************************************ 00:04:44.573 18:19:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.573 18:19:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.573 18:19:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.573 18:19:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.573 18:19:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.573 ************************************ 00:04:44.573 START TEST skip_rpc_with_json 00:04:44.573 ************************************ 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3733433 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3733433 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3733433 ']' 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
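[Note] The skip_rpc case that just passed is a negative test: with --no-rpc-server the target must start, but any RPC against it must fail, which is why the trace wraps rpc_cmd in NOT. A condensed sketch of the same check without the harness helpers (paths assumed relative to the repo root):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    # This call must fail for the test to pass; nothing listens on the socket.
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo 'FAIL: RPC succeeded despite --no-rpc-server'
        kill -9 $pid
        exit 1
    fi
    kill -9 $pid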
00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.573 18:19:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.573 [2024-07-15 18:19:29.951779] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:44.573 [2024-07-15 18:19:29.951819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733433 ] 00:04:44.573 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.573 [2024-07-15 18:19:30.019754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.573 [2024-07-15 18:19:30.092096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 [2024-07-15 18:19:30.755301] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.510 request: 00:04:45.510 { 00:04:45.510 "trtype": "tcp", 00:04:45.510 "method": "nvmf_get_transports", 00:04:45.510 "req_id": 1 00:04:45.510 } 00:04:45.510 Got JSON-RPC error response 00:04:45.510 response: 00:04:45.510 { 00:04:45.510 "code": -19, 00:04:45.510 "message": "No such device" 00:04:45.510 } 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 [2024-07-15 18:19:30.763399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.510 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.510 { 00:04:45.510 "subsystems": [ 00:04:45.510 { 00:04:45.510 "subsystem": "vfio_user_target", 00:04:45.510 "config": null 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "keyring", 00:04:45.510 "config": [] 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "iobuf", 00:04:45.510 "config": [ 00:04:45.510 { 00:04:45.510 "method": "iobuf_set_options", 00:04:45.510 "params": { 00:04:45.510 "small_pool_count": 8192, 00:04:45.510 "large_pool_count": 1024, 00:04:45.510 "small_bufsize": 8192, 00:04:45.510 "large_bufsize": 
135168 00:04:45.510 } 00:04:45.510 } 00:04:45.510 ] 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "sock", 00:04:45.510 "config": [ 00:04:45.510 { 00:04:45.510 "method": "sock_set_default_impl", 00:04:45.510 "params": { 00:04:45.510 "impl_name": "posix" 00:04:45.510 } 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "method": "sock_impl_set_options", 00:04:45.510 "params": { 00:04:45.510 "impl_name": "ssl", 00:04:45.510 "recv_buf_size": 4096, 00:04:45.510 "send_buf_size": 4096, 00:04:45.510 "enable_recv_pipe": true, 00:04:45.510 "enable_quickack": false, 00:04:45.510 "enable_placement_id": 0, 00:04:45.510 "enable_zerocopy_send_server": true, 00:04:45.510 "enable_zerocopy_send_client": false, 00:04:45.510 "zerocopy_threshold": 0, 00:04:45.510 "tls_version": 0, 00:04:45.510 "enable_ktls": false 00:04:45.510 } 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "method": "sock_impl_set_options", 00:04:45.510 "params": { 00:04:45.510 "impl_name": "posix", 00:04:45.510 "recv_buf_size": 2097152, 00:04:45.510 "send_buf_size": 2097152, 00:04:45.510 "enable_recv_pipe": true, 00:04:45.510 "enable_quickack": false, 00:04:45.510 "enable_placement_id": 0, 00:04:45.510 "enable_zerocopy_send_server": true, 00:04:45.510 "enable_zerocopy_send_client": false, 00:04:45.510 "zerocopy_threshold": 0, 00:04:45.510 "tls_version": 0, 00:04:45.510 "enable_ktls": false 00:04:45.510 } 00:04:45.510 } 00:04:45.510 ] 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "vmd", 00:04:45.510 "config": [] 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "accel", 00:04:45.510 "config": [ 00:04:45.510 { 00:04:45.510 "method": "accel_set_options", 00:04:45.510 "params": { 00:04:45.510 "small_cache_size": 128, 00:04:45.510 "large_cache_size": 16, 00:04:45.510 "task_count": 2048, 00:04:45.510 "sequence_count": 2048, 00:04:45.510 "buf_count": 2048 00:04:45.510 } 00:04:45.510 } 00:04:45.510 ] 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "subsystem": "bdev", 00:04:45.510 "config": [ 00:04:45.510 { 00:04:45.510 "method": "bdev_set_options", 00:04:45.510 "params": { 00:04:45.510 "bdev_io_pool_size": 65535, 00:04:45.510 "bdev_io_cache_size": 256, 00:04:45.510 "bdev_auto_examine": true, 00:04:45.510 "iobuf_small_cache_size": 128, 00:04:45.510 "iobuf_large_cache_size": 16 00:04:45.510 } 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "method": "bdev_raid_set_options", 00:04:45.510 "params": { 00:04:45.510 "process_window_size_kb": 1024 00:04:45.510 } 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "method": "bdev_iscsi_set_options", 00:04:45.510 "params": { 00:04:45.510 "timeout_sec": 30 00:04:45.510 } 00:04:45.510 }, 00:04:45.510 { 00:04:45.510 "method": "bdev_nvme_set_options", 00:04:45.510 "params": { 00:04:45.510 "action_on_timeout": "none", 00:04:45.510 "timeout_us": 0, 00:04:45.510 "timeout_admin_us": 0, 00:04:45.510 "keep_alive_timeout_ms": 10000, 00:04:45.510 "arbitration_burst": 0, 00:04:45.510 "low_priority_weight": 0, 00:04:45.510 "medium_priority_weight": 0, 00:04:45.510 "high_priority_weight": 0, 00:04:45.510 "nvme_adminq_poll_period_us": 10000, 00:04:45.510 "nvme_ioq_poll_period_us": 0, 00:04:45.510 "io_queue_requests": 0, 00:04:45.510 "delay_cmd_submit": true, 00:04:45.510 "transport_retry_count": 4, 00:04:45.510 "bdev_retry_count": 3, 00:04:45.510 "transport_ack_timeout": 0, 00:04:45.510 "ctrlr_loss_timeout_sec": 0, 00:04:45.510 "reconnect_delay_sec": 0, 00:04:45.510 "fast_io_fail_timeout_sec": 0, 00:04:45.510 "disable_auto_failback": false, 00:04:45.510 "generate_uuids": false, 00:04:45.510 "transport_tos": 0, 
00:04:45.510 "nvme_error_stat": false, 00:04:45.510 "rdma_srq_size": 0, 00:04:45.510 "io_path_stat": false, 00:04:45.510 "allow_accel_sequence": false, 00:04:45.510 "rdma_max_cq_size": 0, 00:04:45.510 "rdma_cm_event_timeout_ms": 0, 00:04:45.510 "dhchap_digests": [ 00:04:45.510 "sha256", 00:04:45.510 "sha384", 00:04:45.510 "sha512" 00:04:45.510 ], 00:04:45.510 "dhchap_dhgroups": [ 00:04:45.510 "null", 00:04:45.510 "ffdhe2048", 00:04:45.510 "ffdhe3072", 00:04:45.510 "ffdhe4096", 00:04:45.510 "ffdhe6144", 00:04:45.510 "ffdhe8192" 00:04:45.511 ] 00:04:45.511 } 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "method": "bdev_nvme_set_hotplug", 00:04:45.511 "params": { 00:04:45.511 "period_us": 100000, 00:04:45.511 "enable": false 00:04:45.511 } 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "method": "bdev_wait_for_examine" 00:04:45.511 } 00:04:45.511 ] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "scsi", 00:04:45.511 "config": null 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "scheduler", 00:04:45.511 "config": [ 00:04:45.511 { 00:04:45.511 "method": "framework_set_scheduler", 00:04:45.511 "params": { 00:04:45.511 "name": "static" 00:04:45.511 } 00:04:45.511 } 00:04:45.511 ] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "vhost_scsi", 00:04:45.511 "config": [] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "vhost_blk", 00:04:45.511 "config": [] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "ublk", 00:04:45.511 "config": [] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "nbd", 00:04:45.511 "config": [] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "nvmf", 00:04:45.511 "config": [ 00:04:45.511 { 00:04:45.511 "method": "nvmf_set_config", 00:04:45.511 "params": { 00:04:45.511 "discovery_filter": "match_any", 00:04:45.511 "admin_cmd_passthru": { 00:04:45.511 "identify_ctrlr": false 00:04:45.511 } 00:04:45.511 } 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "method": "nvmf_set_max_subsystems", 00:04:45.511 "params": { 00:04:45.511 "max_subsystems": 1024 00:04:45.511 } 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "method": "nvmf_set_crdt", 00:04:45.511 "params": { 00:04:45.511 "crdt1": 0, 00:04:45.511 "crdt2": 0, 00:04:45.511 "crdt3": 0 00:04:45.511 } 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "method": "nvmf_create_transport", 00:04:45.511 "params": { 00:04:45.511 "trtype": "TCP", 00:04:45.511 "max_queue_depth": 128, 00:04:45.511 "max_io_qpairs_per_ctrlr": 127, 00:04:45.511 "in_capsule_data_size": 4096, 00:04:45.511 "max_io_size": 131072, 00:04:45.511 "io_unit_size": 131072, 00:04:45.511 "max_aq_depth": 128, 00:04:45.511 "num_shared_buffers": 511, 00:04:45.511 "buf_cache_size": 4294967295, 00:04:45.511 "dif_insert_or_strip": false, 00:04:45.511 "zcopy": false, 00:04:45.511 "c2h_success": true, 00:04:45.511 "sock_priority": 0, 00:04:45.511 "abort_timeout_sec": 1, 00:04:45.511 "ack_timeout": 0, 00:04:45.511 "data_wr_pool_size": 0 00:04:45.511 } 00:04:45.511 } 00:04:45.511 ] 00:04:45.511 }, 00:04:45.511 { 00:04:45.511 "subsystem": "iscsi", 00:04:45.511 "config": [ 00:04:45.511 { 00:04:45.511 "method": "iscsi_set_options", 00:04:45.511 "params": { 00:04:45.511 "node_base": "iqn.2016-06.io.spdk", 00:04:45.511 "max_sessions": 128, 00:04:45.511 "max_connections_per_session": 2, 00:04:45.511 "max_queue_depth": 64, 00:04:45.511 "default_time2wait": 2, 00:04:45.511 "default_time2retain": 20, 00:04:45.511 "first_burst_length": 8192, 00:04:45.511 "immediate_data": true, 00:04:45.511 "allow_duplicated_isid": false, 00:04:45.511 
"error_recovery_level": 0, 00:04:45.511 "nop_timeout": 60, 00:04:45.511 "nop_in_interval": 30, 00:04:45.511 "disable_chap": false, 00:04:45.511 "require_chap": false, 00:04:45.511 "mutual_chap": false, 00:04:45.511 "chap_group": 0, 00:04:45.511 "max_large_datain_per_connection": 64, 00:04:45.511 "max_r2t_per_connection": 4, 00:04:45.511 "pdu_pool_size": 36864, 00:04:45.511 "immediate_data_pool_size": 16384, 00:04:45.511 "data_out_pool_size": 2048 00:04:45.511 } 00:04:45.511 } 00:04:45.511 ] 00:04:45.511 } 00:04:45.511 ] 00:04:45.511 } 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3733433 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3733433 ']' 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3733433 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3733433 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3733433' 00:04:45.511 killing process with pid 3733433 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3733433 00:04:45.511 18:19:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3733433 00:04:45.770 18:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3733674 00:04:45.770 18:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.770 18:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3733674 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3733674 ']' 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3733674 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3733674 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3733674' 00:04:51.039 killing process with pid 3733674 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3733674 00:04:51.039 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3733674 
00:04:51.296 18:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.296 18:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.296 00:04:51.296 real 0m6.741s 00:04:51.296 user 0m6.555s 00:04:51.296 sys 0m0.607s 00:04:51.296 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.296 18:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.296 ************************************ 00:04:51.296 END TEST skip_rpc_with_json 00:04:51.296 ************************************ 00:04:51.296 18:19:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.297 18:19:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.297 ************************************ 00:04:51.297 START TEST skip_rpc_with_delay 00:04:51.297 ************************************ 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.297 [2024-07-15 18:19:36.762382] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:51.297 [2024-07-15 18:19:36.762443] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.297 00:04:51.297 real 0m0.065s 00:04:51.297 user 0m0.036s 00:04:51.297 sys 0m0.029s 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.297 18:19:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.297 ************************************ 00:04:51.297 END TEST skip_rpc_with_delay 00:04:51.297 ************************************ 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.297 18:19:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.297 18:19:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.297 18:19:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.297 18:19:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.297 ************************************ 00:04:51.297 START TEST exit_on_failed_rpc_init 00:04:51.297 ************************************ 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3734643 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3734643 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3734643 ']' 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.297 18:19:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.555 [2024-07-15 18:19:36.891628] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:04:51.555 [2024-07-15 18:19:36.891671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734643 ] 00:04:51.555 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.555 [2024-07-15 18:19:36.955405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.555 [2024-07-15 18:19:37.034103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.492 [2024-07-15 18:19:37.740856] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:04:52.492 [2024-07-15 18:19:37.740903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734684 ] 00:04:52.492 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.492 [2024-07-15 18:19:37.804009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.492 [2024-07-15 18:19:37.875098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.492 [2024-07-15 18:19:37.875165] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:52.492 [2024-07-15 18:19:37.875173] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.492 [2024-07-15 18:19:37.875179] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3734643 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3734643 ']' 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3734643 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3734643 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3734643' 00:04:52.492 killing process with pid 3734643 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3734643 00:04:52.492 18:19:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3734643 00:04:52.749 00:04:52.749 real 0m1.456s 00:04:52.749 user 0m1.660s 00:04:52.749 sys 0m0.420s 00:04:52.749 18:19:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.749 18:19:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.749 ************************************ 00:04:52.749 END TEST exit_on_failed_rpc_init 00:04:52.749 ************************************ 00:04:53.007 18:19:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.007 18:19:38 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.007 00:04:53.007 real 0m13.991s 00:04:53.007 user 0m13.518s 00:04:53.007 sys 0m1.570s 00:04:53.007 18:19:38 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.007 18:19:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.007 ************************************ 00:04:53.007 END TEST skip_rpc 00:04:53.007 ************************************ 00:04:53.007 18:19:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.007 18:19:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.007 18:19:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.007 18:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.007 18:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:53.008 ************************************ 00:04:53.008 START TEST rpc_client 00:04:53.008 ************************************ 00:04:53.008 18:19:38 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.008 * Looking for test storage... 00:04:53.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.008 18:19:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.008 OK 00:04:53.008 18:19:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.008 00:04:53.008 real 0m0.112s 00:04:53.008 user 0m0.053s 00:04:53.008 sys 0m0.067s 00:04:53.008 18:19:38 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.008 18:19:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.008 ************************************ 00:04:53.008 END TEST rpc_client 00:04:53.008 ************************************ 00:04:53.008 18:19:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.008 18:19:38 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.008 18:19:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.008 18:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.008 18:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:53.280 ************************************ 00:04:53.280 START TEST json_config 00:04:53.280 ************************************ 00:04:53.280 18:19:38 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.280 18:19:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.280 
18:19:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.280 18:19:38 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.280 18:19:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.280 18:19:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.281 18:19:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.281 18:19:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.281 18:19:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.281 18:19:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.281 18:19:38 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.281 18:19:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.281 18:19:38 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.281 18:19:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:53.281 INFO: JSON configuration test init 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.281 18:19:38 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.281 18:19:38 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.281 18:19:38 json_config -- json_config/common.sh@10 -- # shift 00:04:53.281 18:19:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.281 18:19:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.281 18:19:38 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.281 18:19:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.281 18:19:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.281 18:19:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3734995 00:04:53.281 18:19:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.281 Waiting for target to run... 00:04:53.281 18:19:38 json_config -- json_config/common.sh@25 -- # waitforlisten 3734995 /var/tmp/spdk_tgt.sock 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 3734995 ']' 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.281 18:19:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.281 18:19:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.281 [2024-07-15 18:19:38.732381] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:53.281 [2024-07-15 18:19:38.732436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734995 ] 00:04:53.281 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.568 [2024-07-15 18:19:39.008078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.568 [2024-07-15 18:19:39.076641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:54.176 18:19:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.176 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.176 18:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.176 18:19:39 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:54.176 18:19:39 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:57.459 18:19:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.459 18:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:57.459 18:19:42 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.459 18:19:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.459 MallocForNvmf0 00:04:57.718 18:19:43 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.718 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.718 MallocForNvmf1 00:04:57.718 18:19:43 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.718 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.976 [2024-07-15 18:19:43.389940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.976 18:19:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:57.976 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.234 18:19:43 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.234 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.234 18:19:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.234 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.492 18:19:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.492 18:19:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.750 [2024-07-15 18:19:44.092106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.750 18:19:44 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:58.750 18:19:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.750 18:19:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.750 18:19:44 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:58.750 18:19:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.750 18:19:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.751 18:19:44 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:58.751 18:19:44 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:58.751 18:19:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.009 MallocBdevForConfigChangeCheck 00:04:59.009 18:19:44 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:59.009 18:19:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.009 18:19:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.009 18:19:44 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:59.009 18:19:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.267 18:19:44 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:59.267 INFO: shutting down applications... 00:04:59.267 18:19:44 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:59.267 18:19:44 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:59.267 18:19:44 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:59.267 18:19:44 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:01.821 Calling clear_iscsi_subsystem 00:05:01.821 Calling clear_nvmf_subsystem 00:05:01.821 Calling clear_nbd_subsystem 00:05:01.821 Calling clear_ublk_subsystem 00:05:01.821 Calling clear_vhost_blk_subsystem 00:05:01.821 Calling clear_vhost_scsi_subsystem 00:05:01.821 Calling clear_bdev_subsystem 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:01.821 18:19:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:01.821 18:19:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:01.821 18:19:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:01.821 18:19:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:01.821 18:19:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:01.821 18:19:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:01.821 18:19:47 json_config -- json_config/common.sh@35 -- # [[ -n 3734995 ]] 00:05:01.821 18:19:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3734995 00:05:01.821 18:19:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:01.821 18:19:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.821 18:19:47 json_config -- json_config/common.sh@41 -- # kill -0 3734995 00:05:01.821 18:19:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.388 18:19:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.388 18:19:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.388 18:19:47 json_config -- json_config/common.sh@41 -- # kill -0 3734995 00:05:02.388 18:19:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.388 18:19:47 json_config -- json_config/common.sh@43 -- # break 00:05:02.388 18:19:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.388 18:19:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:02.388 SPDK target shutdown done 00:05:02.388 18:19:47 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:02.388 INFO: relaunching applications... 00:05:02.388 18:19:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.388 18:19:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.388 18:19:47 json_config -- json_config/common.sh@10 -- # shift 00:05:02.388 18:19:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.388 18:19:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.388 18:19:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.388 18:19:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.388 18:19:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.388 18:19:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3736729 00:05:02.388 18:19:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.388 Waiting for target to run... 00:05:02.388 18:19:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.388 18:19:47 json_config -- json_config/common.sh@25 -- # waitforlisten 3736729 /var/tmp/spdk_tgt.sock 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 3736729 ']' 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.388 18:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.388 [2024-07-15 18:19:47.763498] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
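Every relaunch in these records blocks on waitforlisten until the new target answers on its UNIX-domain RPC socket. Roughly, and simplified from the helper in test/common/autotest_common.sh (the real one also supports TCP addresses and a configurable retry budget; the retry count and sleep interval here are illustrative), the loop looks like this:

    # Simplified sketch of waitforlisten: poll the RPC socket until the
    # target responds or the process dies. rpc_get_methods is a cheap RPC
    # that any SPDK application answers once its server is up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do             # assumed retry budget
            kill -0 "$pid" 2>/dev/null || return 1  # app exited early
            if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1; then
                return 0                            # socket is serving
            fi
            sleep 0.1
        done
        return 1
    }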
00:05:02.388 [2024-07-15 18:19:47.763553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736729 ] 00:05:02.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.954 [2024-07-15 18:19:48.219140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.954 [2024-07-15 18:19:48.306126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.237 [2024-07-15 18:19:51.314233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.237 [2024-07-15 18:19:51.346521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.495 18:19:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.495 18:19:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:06.495 18:19:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.495 00:05:06.495 18:19:51 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:06.495 18:19:51 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:06.495 INFO: Checking if target configuration is the same... 00:05:06.495 18:19:51 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.495 18:19:51 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:06.495 18:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.495 + '[' 2 -ne 2 ']' 00:05:06.495 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.495 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.495 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.495 +++ basename /dev/fd/62 00:05:06.495 ++ mktemp /tmp/62.XXX 00:05:06.495 + tmp_file_1=/tmp/62.ksx 00:05:06.495 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.495 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.495 + tmp_file_2=/tmp/spdk_tgt_config.json.02Y 00:05:06.495 + ret=0 00:05:06.495 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.753 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.753 + diff -u /tmp/62.ksx /tmp/spdk_tgt_config.json.02Y 00:05:06.753 + echo 'INFO: JSON config files are the same' 00:05:06.753 INFO: JSON config files are the same 00:05:06.753 + rm /tmp/62.ksx /tmp/spdk_tgt_config.json.02Y 00:05:06.753 + exit 0 00:05:06.753 18:19:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:06.753 18:19:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:06.753 INFO: changing configuration and checking if this can be detected... 
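Both configuration checks in this test lean on the same json_diff.sh mechanism that just reported 'INFO: JSON config files are the same': each side is normalized by config_filter.py -method sort into a mktemp file, and a plain diff -u decides the outcome. Stripped of the /dev/fd plumbing, the comparison amounts to the following sketch (temp-file names are illustrative; the run above used mktemp):

    # Compare the target's live configuration against the saved one by
    # sorting both JSON documents into a canonical form and diffing them.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/saved.sorted
    if diff -u /tmp/saved.sorted /tmp/live.sorted; then
        echo 'INFO: JSON config files are the same'   # first check: match
    else
        echo 'INFO: configuration change detected.'   # second check: differ
    fi
    rm /tmp/live.sorted /tmp/saved.sorted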
00:05:06.753 18:19:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.753 18:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.010 18:19:52 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.010 18:19:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:07.010 18:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.010 + '[' 2 -ne 2 ']' 00:05:07.010 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.010 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.010 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.010 +++ basename /dev/fd/62 00:05:07.010 ++ mktemp /tmp/62.XXX 00:05:07.010 + tmp_file_1=/tmp/62.ML5 00:05:07.010 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.010 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.010 + tmp_file_2=/tmp/spdk_tgt_config.json.XH3 00:05:07.010 + ret=0 00:05:07.010 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.268 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.526 + diff -u /tmp/62.ML5 /tmp/spdk_tgt_config.json.XH3 00:05:07.526 + ret=1 00:05:07.526 + echo '=== Start of file: /tmp/62.ML5 ===' 00:05:07.526 + cat /tmp/62.ML5 00:05:07.526 + echo '=== End of file: /tmp/62.ML5 ===' 00:05:07.526 + echo '' 00:05:07.526 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XH3 ===' 00:05:07.526 + cat /tmp/spdk_tgt_config.json.XH3 00:05:07.526 + echo '=== End of file: /tmp/spdk_tgt_config.json.XH3 ===' 00:05:07.526 + echo '' 00:05:07.526 + rm /tmp/62.ML5 /tmp/spdk_tgt_config.json.XH3 00:05:07.526 + exit 1 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:07.526 INFO: configuration change detected. 
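With the change detected, the test tears the target down through the same killprocess helper that appears after every test in this log. Condensed from the xtrace lines around each kill (the sudo special case is reduced to a comment), it behaves roughly like:

    # Approximate shape of killprocess: confirm the PID is alive, note the
    # process name, announce the kill, then reap the child with wait.
    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1              # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            return 1   # real helper instead kills the command under sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                     # reap; ignore 128+signal
    }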
00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@317 -- # [[ -n 3736729 ]] 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.526 18:19:52 json_config -- json_config/json_config.sh@323 -- # killprocess 3736729 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@948 -- # '[' -z 3736729 ']' 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@952 -- # kill -0 3736729 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@953 -- # uname 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3736729 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3736729' 00:05:07.526 killing process with pid 3736729 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@967 -- # kill 3736729 00:05:07.526 18:19:52 json_config -- common/autotest_common.sh@972 -- # wait 3736729 00:05:09.426 18:19:54 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.426 18:19:54 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.426 18:19:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.426 18:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.684 18:19:54 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:09.684 18:19:54 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.684 INFO: Success 00:05:09.684 00:05:09.684 real 0m16.405s 
00:05:09.684 user 0m17.281s 00:05:09.684 sys 0m1.890s 00:05:09.684 18:19:54 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.684 18:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.684 ************************************ 00:05:09.684 END TEST json_config 00:05:09.684 ************************************ 00:05:09.684 18:19:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.684 18:19:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.684 18:19:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.684 18:19:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.684 18:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.684 ************************************ 00:05:09.684 START TEST json_config_extra_key 00:05:09.684 ************************************ 00:05:09.684 18:19:55 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.684 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.684 18:19:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.685 18:19:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.685 18:19:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.685 18:19:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.685 18:19:55 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.685 18:19:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.685 18:19:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.685 18:19:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.685 18:19:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.685 18:19:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.685 18:19:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.685 INFO: launching applications... 00:05:09.685 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3738002 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.685 Waiting for target to run... 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3738002 /var/tmp/spdk_tgt.sock 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3738002 ']' 00:05:09.685 18:19:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.685 18:19:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.685 [2024-07-15 18:19:55.205870] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
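This extra-key variant only has to prove that the target boots from a static extra_key.json and exits cleanly on SIGINT. Condensed, and reusing the waitforlisten sketch above, the driver is roughly as follows (flags copied from the records here; the 30 x 0.5 s poll mirrors the loop in json_config/common.sh):

    # Launch the target from the static extra-key JSON, wait for the RPC
    # socket, then request shutdown and poll until the process is gone.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    pid=$!
    waitforlisten "$pid" /var/tmp/spdk_tgt.sock
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break     # exited => shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'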
00:05:09.685 [2024-07-15 18:19:55.205919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738002 ] 00:05:09.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.252 [2024-07-15 18:19:55.650736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.252 [2024-07-15 18:19:55.736644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.510 18:19:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.510 18:19:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:10.510 18:19:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.510 00:05:10.510 18:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.510 INFO: shutting down applications... 00:05:10.510 18:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.510 18:19:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3738002 ]] 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3738002 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3738002 00:05:10.511 18:19:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3738002 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.076 18:19:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.076 SPDK target shutdown done 00:05:11.076 18:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.076 Success 00:05:11.076 00:05:11.076 real 0m1.453s 00:05:11.076 user 0m1.072s 00:05:11.076 sys 0m0.531s 00:05:11.076 18:19:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.076 18:19:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.076 ************************************ 00:05:11.076 END TEST json_config_extra_key 00:05:11.076 ************************************ 00:05:11.076 18:19:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.076 18:19:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.076 18:19:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.076 18:19:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.076 18:19:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.076 ************************************ 00:05:11.076 START TEST alias_rpc 00:05:11.076 ************************************ 00:05:11.076 18:19:56 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.335 * Looking for test storage... 00:05:11.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.335 18:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.335 18:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3738287 00:05:11.335 18:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.335 18:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3738287 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3738287 ']' 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.335 18:19:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.335 [2024-07-15 18:19:56.716412] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:11.335 [2024-07-15 18:19:56.716465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738287 ] 00:05:11.335 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.335 [2024-07-15 18:19:56.783112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.335 [2024-07-15 18:19:56.857084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.292 18:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.292 18:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3738287 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3738287 ']' 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3738287 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738287 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738287' 00:05:12.292 killing process with pid 3738287 00:05:12.292 18:19:57 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3738287 00:05:12.292 18:19:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 3738287 00:05:12.551 00:05:12.551 real 0m1.487s 00:05:12.551 user 0m1.615s 00:05:12.551 sys 0m0.408s 00:05:12.551 18:19:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.551 18:19:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.551 ************************************ 00:05:12.551 END TEST alias_rpc 00:05:12.551 ************************************ 00:05:12.551 18:19:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.551 18:19:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:12.551 18:19:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.551 18:19:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.551 18:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.551 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:05:12.810 ************************************ 00:05:12.810 START TEST spdkcli_tcp 00:05:12.810 ************************************ 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.810 * Looking for test storage... 00:05:12.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3738574 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3738574 00:05:12.810 18:19:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3738574 ']' 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.810 18:19:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.810 [2024-07-15 18:19:58.274235] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:12.810 [2024-07-15 18:19:58.274284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738574 ] 00:05:12.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.810 [2024-07-15 18:19:58.341227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.069 [2024-07-15 18:19:58.414209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.069 [2024-07-15 18:19:58.414210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.635 18:19:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.635 18:19:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:13.635 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3738802 00:05:13.635 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.635 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.894 [ 00:05:13.894 "bdev_malloc_delete", 00:05:13.894 "bdev_malloc_create", 00:05:13.894 "bdev_null_resize", 00:05:13.894 "bdev_null_delete", 00:05:13.894 "bdev_null_create", 00:05:13.894 "bdev_nvme_cuse_unregister", 00:05:13.894 "bdev_nvme_cuse_register", 00:05:13.894 "bdev_opal_new_user", 00:05:13.894 "bdev_opal_set_lock_state", 00:05:13.894 "bdev_opal_delete", 00:05:13.894 "bdev_opal_get_info", 00:05:13.894 "bdev_opal_create", 00:05:13.894 "bdev_nvme_opal_revert", 00:05:13.894 "bdev_nvme_opal_init", 00:05:13.894 "bdev_nvme_send_cmd", 00:05:13.894 "bdev_nvme_get_path_iostat", 00:05:13.894 "bdev_nvme_get_mdns_discovery_info", 00:05:13.894 "bdev_nvme_stop_mdns_discovery", 00:05:13.894 "bdev_nvme_start_mdns_discovery", 00:05:13.894 "bdev_nvme_set_multipath_policy", 00:05:13.894 "bdev_nvme_set_preferred_path", 00:05:13.894 "bdev_nvme_get_io_paths", 00:05:13.894 "bdev_nvme_remove_error_injection", 00:05:13.894 "bdev_nvme_add_error_injection", 00:05:13.894 "bdev_nvme_get_discovery_info", 00:05:13.894 "bdev_nvme_stop_discovery", 00:05:13.894 "bdev_nvme_start_discovery", 00:05:13.894 "bdev_nvme_get_controller_health_info", 00:05:13.894 "bdev_nvme_disable_controller", 00:05:13.894 "bdev_nvme_enable_controller", 00:05:13.894 "bdev_nvme_reset_controller", 00:05:13.894 "bdev_nvme_get_transport_statistics", 00:05:13.894 "bdev_nvme_apply_firmware", 00:05:13.894 "bdev_nvme_detach_controller", 00:05:13.894 "bdev_nvme_get_controllers", 00:05:13.894 "bdev_nvme_attach_controller", 00:05:13.894 "bdev_nvme_set_hotplug", 00:05:13.894 "bdev_nvme_set_options", 00:05:13.894 "bdev_passthru_delete", 00:05:13.894 "bdev_passthru_create", 00:05:13.894 "bdev_lvol_set_parent_bdev", 00:05:13.894 "bdev_lvol_set_parent", 00:05:13.895 "bdev_lvol_check_shallow_copy", 00:05:13.895 "bdev_lvol_start_shallow_copy", 00:05:13.895 "bdev_lvol_grow_lvstore", 00:05:13.895 "bdev_lvol_get_lvols", 00:05:13.895 "bdev_lvol_get_lvstores", 00:05:13.895 "bdev_lvol_delete", 00:05:13.895 "bdev_lvol_set_read_only", 00:05:13.895 "bdev_lvol_resize", 00:05:13.895 "bdev_lvol_decouple_parent", 00:05:13.895 "bdev_lvol_inflate", 00:05:13.895 "bdev_lvol_rename", 00:05:13.895 "bdev_lvol_clone_bdev", 00:05:13.895 "bdev_lvol_clone", 00:05:13.895 "bdev_lvol_snapshot", 00:05:13.895 "bdev_lvol_create", 00:05:13.895 "bdev_lvol_delete_lvstore", 00:05:13.895 
"bdev_lvol_rename_lvstore", 00:05:13.895 "bdev_lvol_create_lvstore", 00:05:13.895 "bdev_raid_set_options", 00:05:13.895 "bdev_raid_remove_base_bdev", 00:05:13.895 "bdev_raid_add_base_bdev", 00:05:13.895 "bdev_raid_delete", 00:05:13.895 "bdev_raid_create", 00:05:13.895 "bdev_raid_get_bdevs", 00:05:13.895 "bdev_error_inject_error", 00:05:13.895 "bdev_error_delete", 00:05:13.895 "bdev_error_create", 00:05:13.895 "bdev_split_delete", 00:05:13.895 "bdev_split_create", 00:05:13.895 "bdev_delay_delete", 00:05:13.895 "bdev_delay_create", 00:05:13.895 "bdev_delay_update_latency", 00:05:13.895 "bdev_zone_block_delete", 00:05:13.895 "bdev_zone_block_create", 00:05:13.895 "blobfs_create", 00:05:13.895 "blobfs_detect", 00:05:13.895 "blobfs_set_cache_size", 00:05:13.895 "bdev_aio_delete", 00:05:13.895 "bdev_aio_rescan", 00:05:13.895 "bdev_aio_create", 00:05:13.895 "bdev_ftl_set_property", 00:05:13.895 "bdev_ftl_get_properties", 00:05:13.895 "bdev_ftl_get_stats", 00:05:13.895 "bdev_ftl_unmap", 00:05:13.895 "bdev_ftl_unload", 00:05:13.895 "bdev_ftl_delete", 00:05:13.895 "bdev_ftl_load", 00:05:13.895 "bdev_ftl_create", 00:05:13.895 "bdev_virtio_attach_controller", 00:05:13.895 "bdev_virtio_scsi_get_devices", 00:05:13.895 "bdev_virtio_detach_controller", 00:05:13.895 "bdev_virtio_blk_set_hotplug", 00:05:13.895 "bdev_iscsi_delete", 00:05:13.895 "bdev_iscsi_create", 00:05:13.895 "bdev_iscsi_set_options", 00:05:13.895 "accel_error_inject_error", 00:05:13.895 "ioat_scan_accel_module", 00:05:13.895 "dsa_scan_accel_module", 00:05:13.895 "iaa_scan_accel_module", 00:05:13.895 "vfu_virtio_create_scsi_endpoint", 00:05:13.895 "vfu_virtio_scsi_remove_target", 00:05:13.895 "vfu_virtio_scsi_add_target", 00:05:13.895 "vfu_virtio_create_blk_endpoint", 00:05:13.895 "vfu_virtio_delete_endpoint", 00:05:13.895 "keyring_file_remove_key", 00:05:13.895 "keyring_file_add_key", 00:05:13.895 "keyring_linux_set_options", 00:05:13.895 "iscsi_get_histogram", 00:05:13.895 "iscsi_enable_histogram", 00:05:13.895 "iscsi_set_options", 00:05:13.895 "iscsi_get_auth_groups", 00:05:13.895 "iscsi_auth_group_remove_secret", 00:05:13.895 "iscsi_auth_group_add_secret", 00:05:13.895 "iscsi_delete_auth_group", 00:05:13.895 "iscsi_create_auth_group", 00:05:13.895 "iscsi_set_discovery_auth", 00:05:13.895 "iscsi_get_options", 00:05:13.895 "iscsi_target_node_request_logout", 00:05:13.895 "iscsi_target_node_set_redirect", 00:05:13.895 "iscsi_target_node_set_auth", 00:05:13.895 "iscsi_target_node_add_lun", 00:05:13.895 "iscsi_get_stats", 00:05:13.895 "iscsi_get_connections", 00:05:13.895 "iscsi_portal_group_set_auth", 00:05:13.895 "iscsi_start_portal_group", 00:05:13.895 "iscsi_delete_portal_group", 00:05:13.895 "iscsi_create_portal_group", 00:05:13.895 "iscsi_get_portal_groups", 00:05:13.895 "iscsi_delete_target_node", 00:05:13.895 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.895 "iscsi_target_node_add_pg_ig_maps", 00:05:13.895 "iscsi_create_target_node", 00:05:13.895 "iscsi_get_target_nodes", 00:05:13.895 "iscsi_delete_initiator_group", 00:05:13.895 "iscsi_initiator_group_remove_initiators", 00:05:13.895 "iscsi_initiator_group_add_initiators", 00:05:13.895 "iscsi_create_initiator_group", 00:05:13.895 "iscsi_get_initiator_groups", 00:05:13.895 "nvmf_set_crdt", 00:05:13.895 "nvmf_set_config", 00:05:13.895 "nvmf_set_max_subsystems", 00:05:13.895 "nvmf_stop_mdns_prr", 00:05:13.895 "nvmf_publish_mdns_prr", 00:05:13.895 "nvmf_subsystem_get_listeners", 00:05:13.895 "nvmf_subsystem_get_qpairs", 00:05:13.895 "nvmf_subsystem_get_controllers", 00:05:13.895 
"nvmf_get_stats", 00:05:13.895 "nvmf_get_transports", 00:05:13.895 "nvmf_create_transport", 00:05:13.895 "nvmf_get_targets", 00:05:13.895 "nvmf_delete_target", 00:05:13.895 "nvmf_create_target", 00:05:13.895 "nvmf_subsystem_allow_any_host", 00:05:13.895 "nvmf_subsystem_remove_host", 00:05:13.895 "nvmf_subsystem_add_host", 00:05:13.895 "nvmf_ns_remove_host", 00:05:13.895 "nvmf_ns_add_host", 00:05:13.895 "nvmf_subsystem_remove_ns", 00:05:13.895 "nvmf_subsystem_add_ns", 00:05:13.895 "nvmf_subsystem_listener_set_ana_state", 00:05:13.895 "nvmf_discovery_get_referrals", 00:05:13.895 "nvmf_discovery_remove_referral", 00:05:13.895 "nvmf_discovery_add_referral", 00:05:13.895 "nvmf_subsystem_remove_listener", 00:05:13.895 "nvmf_subsystem_add_listener", 00:05:13.895 "nvmf_delete_subsystem", 00:05:13.895 "nvmf_create_subsystem", 00:05:13.895 "nvmf_get_subsystems", 00:05:13.895 "env_dpdk_get_mem_stats", 00:05:13.895 "nbd_get_disks", 00:05:13.895 "nbd_stop_disk", 00:05:13.895 "nbd_start_disk", 00:05:13.895 "ublk_recover_disk", 00:05:13.895 "ublk_get_disks", 00:05:13.895 "ublk_stop_disk", 00:05:13.895 "ublk_start_disk", 00:05:13.895 "ublk_destroy_target", 00:05:13.895 "ublk_create_target", 00:05:13.895 "virtio_blk_create_transport", 00:05:13.895 "virtio_blk_get_transports", 00:05:13.895 "vhost_controller_set_coalescing", 00:05:13.895 "vhost_get_controllers", 00:05:13.895 "vhost_delete_controller", 00:05:13.895 "vhost_create_blk_controller", 00:05:13.895 "vhost_scsi_controller_remove_target", 00:05:13.895 "vhost_scsi_controller_add_target", 00:05:13.895 "vhost_start_scsi_controller", 00:05:13.895 "vhost_create_scsi_controller", 00:05:13.895 "thread_set_cpumask", 00:05:13.895 "framework_get_governor", 00:05:13.895 "framework_get_scheduler", 00:05:13.895 "framework_set_scheduler", 00:05:13.895 "framework_get_reactors", 00:05:13.895 "thread_get_io_channels", 00:05:13.895 "thread_get_pollers", 00:05:13.895 "thread_get_stats", 00:05:13.895 "framework_monitor_context_switch", 00:05:13.895 "spdk_kill_instance", 00:05:13.895 "log_enable_timestamps", 00:05:13.895 "log_get_flags", 00:05:13.895 "log_clear_flag", 00:05:13.895 "log_set_flag", 00:05:13.895 "log_get_level", 00:05:13.895 "log_set_level", 00:05:13.895 "log_get_print_level", 00:05:13.895 "log_set_print_level", 00:05:13.895 "framework_enable_cpumask_locks", 00:05:13.895 "framework_disable_cpumask_locks", 00:05:13.895 "framework_wait_init", 00:05:13.895 "framework_start_init", 00:05:13.895 "scsi_get_devices", 00:05:13.895 "bdev_get_histogram", 00:05:13.895 "bdev_enable_histogram", 00:05:13.895 "bdev_set_qos_limit", 00:05:13.895 "bdev_set_qd_sampling_period", 00:05:13.895 "bdev_get_bdevs", 00:05:13.895 "bdev_reset_iostat", 00:05:13.895 "bdev_get_iostat", 00:05:13.895 "bdev_examine", 00:05:13.895 "bdev_wait_for_examine", 00:05:13.895 "bdev_set_options", 00:05:13.895 "notify_get_notifications", 00:05:13.895 "notify_get_types", 00:05:13.895 "accel_get_stats", 00:05:13.895 "accel_set_options", 00:05:13.895 "accel_set_driver", 00:05:13.895 "accel_crypto_key_destroy", 00:05:13.895 "accel_crypto_keys_get", 00:05:13.895 "accel_crypto_key_create", 00:05:13.895 "accel_assign_opc", 00:05:13.895 "accel_get_module_info", 00:05:13.895 "accel_get_opc_assignments", 00:05:13.895 "vmd_rescan", 00:05:13.895 "vmd_remove_device", 00:05:13.895 "vmd_enable", 00:05:13.895 "sock_get_default_impl", 00:05:13.895 "sock_set_default_impl", 00:05:13.895 "sock_impl_set_options", 00:05:13.895 "sock_impl_get_options", 00:05:13.895 "iobuf_get_stats", 00:05:13.895 "iobuf_set_options", 
00:05:13.895 "keyring_get_keys", 00:05:13.895 "framework_get_pci_devices", 00:05:13.895 "framework_get_config", 00:05:13.895 "framework_get_subsystems", 00:05:13.895 "vfu_tgt_set_base_path", 00:05:13.895 "trace_get_info", 00:05:13.895 "trace_get_tpoint_group_mask", 00:05:13.895 "trace_disable_tpoint_group", 00:05:13.895 "trace_enable_tpoint_group", 00:05:13.895 "trace_clear_tpoint_mask", 00:05:13.895 "trace_set_tpoint_mask", 00:05:13.895 "spdk_get_version", 00:05:13.895 "rpc_get_methods" 00:05:13.895 ] 00:05:13.895 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.895 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.895 18:19:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3738574 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3738574 ']' 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3738574 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738574 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738574' 00:05:13.895 killing process with pid 3738574 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3738574 00:05:13.895 18:19:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3738574 00:05:14.154 00:05:14.154 real 0m1.515s 00:05:14.154 user 0m2.832s 00:05:14.154 sys 0m0.415s 00:05:14.154 18:19:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.154 18:19:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.154 ************************************ 00:05:14.154 END TEST spdkcli_tcp 00:05:14.154 ************************************ 00:05:14.154 18:19:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.154 18:19:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.154 18:19:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.154 18:19:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.154 18:19:59 -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 ************************************ 00:05:14.412 START TEST dpdk_mem_utility 00:05:14.412 ************************************ 00:05:14.412 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.412 * Looking for test storage... 
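
The spdkcli_tcp test just completed reaches the target's UNIX-domain RPC socket over TCP by interposing socat, then drives rpc_get_methods through the bridge; the long JSON array above is that call's output, every RPC method the target currently exposes. The essentials, as traced (the final kill is an assumption, since the test's err_cleanup trap normally handles teardown):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r: connection retries, -t: timeout, -s/-p: TCP address and port
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"
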
00:05:14.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.412 18:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.412 18:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3738967 00:05:14.412 18:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3738967 00:05:14.412 18:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.412 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3738967 ']' 00:05:14.412 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.412 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.412 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.413 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.413 18:19:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.413 [2024-07-15 18:19:59.856879] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:14.413 [2024-07-15 18:19:59.856936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738967 ] 00:05:14.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.413 [2024-07-15 18:19:59.921694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.671 [2024-07-15 18:20:00.000534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.238 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.238 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:15.238 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.238 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.238 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.238 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 { 00:05:15.238 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.238 } 00:05:15.238 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.238 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.238 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:15.238 1 heaps totaling size 814.000000 MiB 00:05:15.238 size: 814.000000 MiB heap id: 0 00:05:15.238 end heaps---------- 00:05:15.238 8 mempools totaling size 598.116089 MiB 00:05:15.238 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.238 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.238 size: 84.521057 MiB name: bdev_io_3738967 00:05:15.238 size: 51.011292 MiB name: evtpool_3738967 00:05:15.238 
size: 50.003479 MiB name: msgpool_3738967 00:05:15.239 size: 21.763794 MiB name: PDU_Pool 00:05:15.239 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.239 size: 0.026123 MiB name: Session_Pool 00:05:15.239 end mempools------- 00:05:15.239 6 memzones totaling size 4.142822 MiB 00:05:15.239 size: 1.000366 MiB name: RG_ring_0_3738967 00:05:15.239 size: 1.000366 MiB name: RG_ring_1_3738967 00:05:15.239 size: 1.000366 MiB name: RG_ring_4_3738967 00:05:15.239 size: 1.000366 MiB name: RG_ring_5_3738967 00:05:15.239 size: 0.125366 MiB name: RG_ring_2_3738967 00:05:15.239 size: 0.015991 MiB name: RG_ring_3_3738967 00:05:15.239 end memzones------- 00:05:15.239 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.239 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:15.239 list of free elements. size: 12.519348 MiB 00:05:15.239 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:15.239 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:15.239 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:15.239 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:15.239 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:15.239 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:15.239 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:15.239 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:15.239 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:15.239 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:15.239 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:15.239 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:15.239 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:15.239 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:15.239 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:15.239 list of standard malloc elements. 
size: 199.218079 MiB 00:05:15.239 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:15.239 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:15.239 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:15.239 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:15.239 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:15.239 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:15.239 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:15.239 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:15.239 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:15.239 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:15.239 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:15.239 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:15.239 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:15.239 list of memzone associated elements. 
size: 602.262573 MiB 00:05:15.239 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:15.239 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.239 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:15.239 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.239 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:15.239 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3738967_0 00:05:15.239 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:15.239 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3738967_0 00:05:15.239 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:15.239 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3738967_0 00:05:15.239 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:15.239 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.239 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:15.239 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.239 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:15.239 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3738967 00:05:15.239 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:15.239 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3738967 00:05:15.239 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:15.239 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3738967 00:05:15.239 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:15.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.239 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:15.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.239 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:15.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.239 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:15.239 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.239 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:15.239 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3738967 00:05:15.239 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:15.239 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3738967 00:05:15.239 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:15.239 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3738967 00:05:15.239 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:15.239 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3738967 00:05:15.239 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:15.239 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3738967 00:05:15.239 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:15.239 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.239 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:15.239 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.239 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:15.239 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.239 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:15.239 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3738967 00:05:15.239 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:15.239 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.239 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:15.239 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.239 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:15.239 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3738967 00:05:15.239 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:15.239 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.239 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:15.239 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3738967 00:05:15.239 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:15.239 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3738967 00:05:15.239 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:15.239 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.239 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.239 18:20:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3738967 00:05:15.239 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3738967 ']' 00:05:15.239 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3738967 00:05:15.239 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:15.239 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.239 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738967 00:05:15.498 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.498 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.498 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738967' 00:05:15.498 killing process with pid 3738967 00:05:15.498 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3738967 00:05:15.498 18:20:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3738967 00:05:15.756 00:05:15.756 real 0m1.389s 00:05:15.756 user 0m1.437s 00:05:15.756 sys 0m0.413s 00:05:15.756 18:20:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.756 18:20:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.756 ************************************ 00:05:15.756 END TEST dpdk_mem_utility 00:05:15.756 ************************************ 00:05:15.756 18:20:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.756 18:20:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.756 18:20:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.756 18:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.756 18:20:01 -- common/autotest_common.sh@10 -- # set +x 00:05:15.756 ************************************ 00:05:15.756 START TEST event 00:05:15.756 ************************************ 00:05:15.756 18:20:01 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.756 * Looking for test storage... 
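
The dpdk_mem_utility trace above shows a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory statistics to a file, and dpdk_mem_info.py renders them. A sketch of the same sequence (the reading of -m 0 is inferred from the trace, where it expands heap 0 into its free, malloc, and memzone element lists):

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # summary: 1 heap, 8 mempools, 6 memzones
  scripts/dpdk_mem_info.py -m 0           # detailed element listing for heap 0
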
00:05:15.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.756 18:20:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:15.756 18:20:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.756 18:20:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.756 18:20:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:15.756 18:20:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.756 18:20:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.756 ************************************ 00:05:15.756 START TEST event_perf 00:05:15.756 ************************************ 00:05:15.756 18:20:01 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.015 Running I/O for 1 seconds...[2024-07-15 18:20:01.314445] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:16.015 [2024-07-15 18:20:01.314514] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739313 ] 00:05:16.015 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.015 [2024-07-15 18:20:01.387522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.015 [2024-07-15 18:20:01.462135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.015 [2024-07-15 18:20:01.462243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.015 [2024-07-15 18:20:01.462367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.015 [2024-07-15 18:20:01.462368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.391 Running I/O for 1 seconds... 00:05:17.391 lcore 0: 208190 00:05:17.391 lcore 1: 208189 00:05:17.391 lcore 2: 208189 00:05:17.391 lcore 3: 208190 00:05:17.391 done. 00:05:17.391 00:05:17.391 real 0m1.239s 00:05:17.391 user 0m4.156s 00:05:17.391 sys 0m0.081s 00:05:17.391 18:20:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.391 18:20:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.391 ************************************ 00:05:17.391 END TEST event_perf 00:05:17.391 ************************************ 00:05:17.391 18:20:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.391 18:20:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.391 18:20:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:17.391 18:20:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.391 18:20:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.391 ************************************ 00:05:17.391 START TEST event_reactor 00:05:17.391 ************************************ 00:05:17.391 18:20:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.391 [2024-07-15 18:20:02.621412] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:17.391 [2024-07-15 18:20:02.621480] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739534 ] 00:05:17.391 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.391 [2024-07-15 18:20:02.671965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.391 [2024-07-15 18:20:02.745137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.325 test_start 00:05:18.325 oneshot 00:05:18.325 tick 100 00:05:18.325 tick 100 00:05:18.325 tick 250 00:05:18.325 tick 100 00:05:18.325 tick 100 00:05:18.325 tick 100 00:05:18.325 tick 250 00:05:18.325 tick 500 00:05:18.325 tick 100 00:05:18.325 tick 100 00:05:18.325 tick 250 00:05:18.325 tick 100 00:05:18.325 tick 100 00:05:18.325 test_end 00:05:18.325 00:05:18.325 real 0m1.212s 00:05:18.325 user 0m1.135s 00:05:18.325 sys 0m0.073s 00:05:18.325 18:20:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.325 18:20:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.325 ************************************ 00:05:18.325 END TEST event_reactor 00:05:18.325 ************************************ 00:05:18.325 18:20:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.325 18:20:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.325 18:20:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:18.325 18:20:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.325 18:20:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 ************************************ 00:05:18.584 START TEST event_reactor_perf 00:05:18.584 ************************************ 00:05:18.584 18:20:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.584 [2024-07-15 18:20:03.904555] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:18.584 [2024-07-15 18:20:03.904624] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739721 ] 00:05:18.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.584 [2024-07-15 18:20:03.958671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.584 [2024-07-15 18:20:04.031300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.961 test_start 00:05:19.961 test_end 00:05:19.961 Performance: 525597 events per second 00:05:19.961 00:05:19.961 real 0m1.216s 00:05:19.961 user 0m1.139s 00:05:19.961 sys 0m0.073s 00:05:19.961 18:20:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.961 18:20:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.961 ************************************ 00:05:19.961 END TEST event_reactor_perf 00:05:19.961 ************************************ 00:05:19.961 18:20:05 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.961 18:20:05 event -- event/event.sh@49 -- # uname -s 00:05:19.961 18:20:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.961 18:20:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.961 18:20:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.961 18:20:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.961 18:20:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.961 ************************************ 00:05:19.961 START TEST event_scheduler 00:05:19.961 ************************************ 00:05:19.961 18:20:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.961 * Looking for test storage... 00:05:19.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:19.961 18:20:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.961 18:20:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3739998 00:05:19.961 18:20:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.961 18:20:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.961 18:20:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3739998 00:05:19.961 18:20:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3739998 ']' 00:05:19.961 18:20:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.961 18:20:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.962 18:20:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.962 18:20:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.962 18:20:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.962 [2024-07-15 18:20:05.303522] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:19.962 [2024-07-15 18:20:05.303572] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739998 ] 00:05:19.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.962 [2024-07-15 18:20:05.373131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.962 [2024-07-15 18:20:05.453988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.962 [2024-07-15 18:20:05.454096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.962 [2024-07-15 18:20:05.454196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.962 [2024-07-15 18:20:05.454197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:20.904 18:20:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 [2024-07-15 18:20:06.116515] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:20.904 [2024-07-15 18:20:06.116532] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.904 [2024-07-15 18:20:06.116541] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.904 [2024-07-15 18:20:06.116546] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.904 [2024-07-15 18:20:06.116551] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 [2024-07-15 18:20:06.187665] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
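
Because the scheduler app is started with --wait-for-rpc, the test can select the scheduler before subsystem initialization, which is exactly what the rpc_cmd calls above do. The equivalent direct rpc.py invocations (a sketch; note from the NOTICE lines that the dynamic scheduler proceeds with its load/core/busy limits even when the dpdk governor cannot initialize):

  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init
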
00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 ************************************ 00:05:20.904 START TEST scheduler_create_thread 00:05:20.904 ************************************ 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 2 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 3 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 4 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 5 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 6 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 7 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 8 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 9 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 10 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.904 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.470 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.470 18:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.470 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.470 18:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.845 18:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.845 18:20:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.845 18:20:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.845 18:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.845 18:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.834 18:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.834 00:05:23.834 real 0m3.099s 00:05:23.834 user 0m0.025s 00:05:23.834 sys 0m0.004s 00:05:23.834 18:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.834 18:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.834 ************************************ 00:05:23.834 END TEST scheduler_create_thread 00:05:23.834 ************************************ 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:23.834 18:20:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.834 18:20:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3739998 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3739998 ']' 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3739998 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.834 18:20:09 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3739998 00:05:24.093 18:20:09 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:24.093 18:20:09 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:24.093 18:20:09 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3739998' 00:05:24.093 killing process with pid 3739998 00:05:24.093 18:20:09 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3739998 00:05:24.093 18:20:09 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3739998 00:05:24.352 [2024-07-15 18:20:09.702789] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
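
The scheduler_create_thread subtest drives everything through rpc.py's plugin mechanism; scheduler_plugin is the test's own plugin, not a stock SPDK RPC, and rpc.py must be able to import it (typically via PYTHONPATH). The sequence it walks, condensed (thread IDs 11 and 12 are the values the create calls returned in this run):

  rpc="scripts/rpc.py --plugin scheduler_plugin"
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # per-core pinned, 100% active
  $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # per-core pinned, idle
  $rpc scheduler_thread_create -n one_third_active -a 30
  $rpc scheduler_thread_create -n half_active -a 0              # returned thread_id=11
  $rpc scheduler_thread_set_active 11 50
  $rpc scheduler_thread_create -n deleted -a 100                # returned thread_id=12
  $rpc scheduler_thread_delete 12
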
00:05:24.611 00:05:24.611 real 0m4.741s 00:05:24.611 user 0m9.203s 00:05:24.612 sys 0m0.366s 00:05:24.612 18:20:09 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.612 18:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 ************************************ 00:05:24.612 END TEST event_scheduler 00:05:24.612 ************************************ 00:05:24.612 18:20:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.612 18:20:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.612 18:20:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.612 18:20:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.612 18:20:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.612 18:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 ************************************ 00:05:24.612 START TEST app_repeat 00:05:24.612 ************************************ 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3740910 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3740910' 00:05:24.612 Process app_repeat pid: 3740910 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.612 spdk_app_start Round 0 00:05:24.612 18:20:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3740910 /var/tmp/spdk-nbd.sock 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3740910 ']' 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.612 18:20:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 [2024-07-15 18:20:10.018742] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:24.612 [2024-07-15 18:20:10.018790] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740910 ] 00:05:24.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.612 [2024-07-15 18:20:10.086000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.612 [2024-07-15 18:20:10.163824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.612 [2024-07-15 18:20:10.163830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.547 18:20:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.547 18:20:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:25.547 18:20:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.547 Malloc0 00:05:25.547 18:20:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.807 Malloc1 00:05:25.807 18:20:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.807 18:20:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.066 /dev/nbd0 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:26.066 18:20:11 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.066 1+0 records in 00:05:26.066 1+0 records out 00:05:26.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190468 s, 21.5 MB/s 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.066 /dev/nbd1 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.066 1+0 records in 00:05:26.066 1+0 records out 00:05:26.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201044 s, 20.4 MB/s 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:26.066 18:20:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.066 18:20:11 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.066 18:20:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.325 { 00:05:26.325 "nbd_device": "/dev/nbd0", 00:05:26.325 "bdev_name": "Malloc0" 00:05:26.325 }, 00:05:26.325 { 00:05:26.325 "nbd_device": "/dev/nbd1", 00:05:26.325 "bdev_name": "Malloc1" 00:05:26.325 } 00:05:26.325 ]' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.325 { 00:05:26.325 "nbd_device": "/dev/nbd0", 00:05:26.325 "bdev_name": "Malloc0" 00:05:26.325 }, 00:05:26.325 { 00:05:26.325 "nbd_device": "/dev/nbd1", 00:05:26.325 "bdev_name": "Malloc1" 00:05:26.325 } 00:05:26.325 ]' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.325 /dev/nbd1' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.325 /dev/nbd1' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.325 18:20:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.326 18:20:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.326 256+0 records in 00:05:26.326 256+0 records out 00:05:26.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103512 s, 101 MB/s 00:05:26.326 18:20:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.326 18:20:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.326 256+0 records in 00:05:26.326 256+0 records out 00:05:26.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133371 s, 78.6 MB/s 00:05:26.326 18:20:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.326 18:20:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.585 256+0 records in 00:05:26.585 256+0 records out 00:05:26.585 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0153994 s, 68.1 MB/s 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.585 18:20:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.585 18:20:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.843 18:20:12 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.843 18:20:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.844 18:20:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.844 18:20:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.844 18:20:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.102 18:20:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.102 18:20:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.361 18:20:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.619 [2024-07-15 18:20:12.926875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.619 [2024-07-15 18:20:12.992997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.619 [2024-07-15 18:20:12.992998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.619 [2024-07-15 18:20:13.033198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.619 [2024-07-15 18:20:13.033236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.914 18:20:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.914 18:20:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.914 spdk_app_start Round 1 00:05:30.914 18:20:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3740910 /var/tmp/spdk-nbd.sock 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3740910 ']' 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
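Each app_repeat round repeats the nbd data path exercised above. Condensed into plain shell (paths shortened; the log uses the full Jenkins workspace paths), Round 0 amounted to:

# Sketch of one round: export two malloc bdevs over NBD, write the same
# random 1 MiB to each, and verify readback byte-for-byte.
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096            # Malloc0: 64 MiB, 4 KiB blocks
$rpc bdev_malloc_create 64 4096            # Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest "$nbd"        # fails loudly on any mismatch
done
rm nbdrandtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1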
00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.914 18:20:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:30.914 18:20:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.914 Malloc0 00:05:30.914 18:20:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.914 Malloc1 00:05:30.914 18:20:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.914 18:20:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.174 /dev/nbd0 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:31.174 1+0 records in 00:05:31.174 1+0 records out 00:05:31.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184624 s, 22.2 MB/s 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.174 /dev/nbd1 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.174 18:20:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:31.174 18:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.175 18:20:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.175 18:20:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.433 1+0 records in 00:05:31.433 1+0 records out 00:05:31.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241729 s, 16.9 MB/s 00:05:31.433 18:20:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.433 18:20:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:31.433 18:20:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.433 18:20:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.433 18:20:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:31.433 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.433 18:20:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.433 18:20:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.433 18:20:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.433 18:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:31.434 { 00:05:31.434 "nbd_device": "/dev/nbd0", 00:05:31.434 "bdev_name": "Malloc0" 00:05:31.434 }, 00:05:31.434 { 00:05:31.434 "nbd_device": "/dev/nbd1", 00:05:31.434 "bdev_name": "Malloc1" 00:05:31.434 } 00:05:31.434 ]' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.434 { 00:05:31.434 "nbd_device": "/dev/nbd0", 00:05:31.434 "bdev_name": "Malloc0" 00:05:31.434 }, 00:05:31.434 { 00:05:31.434 "nbd_device": "/dev/nbd1", 00:05:31.434 "bdev_name": "Malloc1" 00:05:31.434 } 00:05:31.434 ]' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.434 /dev/nbd1' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.434 /dev/nbd1' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.434 256+0 records in 00:05:31.434 256+0 records out 00:05:31.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103877 s, 101 MB/s 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.434 18:20:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.692 256+0 records in 00:05:31.692 256+0 records out 00:05:31.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138986 s, 75.4 MB/s 00:05:31.692 18:20:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.692 18:20:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.692 256+0 records in 00:05:31.692 256+0 records out 00:05:31.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147442 s, 71.1 MB/s 00:05:31.692 18:20:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.693 18:20:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.951 18:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.210 18:20:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.210 18:20:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.501 18:20:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.501 [2024-07-15 18:20:18.016598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.759 [2024-07-15 18:20:18.084443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.759 [2024-07-15 18:20:18.084444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.759 [2024-07-15 18:20:18.125578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.759 [2024-07-15 18:20:18.125619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.292 18:20:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.292 18:20:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.292 spdk_app_start Round 2 00:05:35.292 18:20:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3740910 /var/tmp/spdk-nbd.sock 00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3740910 ']' 00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
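The waitfornbd xtrace that keeps reappearing (autotest_common.sh@866-887) is the readiness gate for each exported device. A reconstruction from the trace — the sleeps between retries are assumed, since only the successful iterations are logged:

# Reconstruction of waitfornbd: poll /proc/partitions until the device
# appears, then prove it serves reads with a single direct-I/O block.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1   # assumed back-off between polls
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=./nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s ./nbdtest)
        rm -f ./nbdtest
        if [ "$size" != 0 ]; then
            return 0
        fi
        sleep 0.1   # assumed back-off
    done
    return 1
}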
00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.292 18:20:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.550 18:20:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.551 18:20:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.551 18:20:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.810 Malloc0 00:05:35.810 18:20:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.810 Malloc1 00:05:36.069 18:20:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.069 /dev/nbd0 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.069 18:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.069 1+0 records in 00:05:36.069 1+0 records out 00:05:36.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229911 s, 17.8 MB/s 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.069 18:20:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.070 18:20:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.070 18:20:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.070 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.070 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.070 18:20:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.329 /dev/nbd1 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.329 1+0 records in 00:05:36.329 1+0 records out 00:05:36.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204165 s, 20.1 MB/s 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.329 18:20:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.329 18:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.588 18:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.588 { 00:05:36.588 "nbd_device": "/dev/nbd0", 00:05:36.588 "bdev_name": "Malloc0" 00:05:36.588 }, 00:05:36.588 { 00:05:36.588 "nbd_device": "/dev/nbd1", 00:05:36.588 "bdev_name": "Malloc1" 00:05:36.588 } 00:05:36.588 ]' 00:05:36.588 18:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.588 { 00:05:36.588 "nbd_device": "/dev/nbd0", 00:05:36.588 "bdev_name": "Malloc0" 00:05:36.588 }, 00:05:36.588 { 00:05:36.588 "nbd_device": "/dev/nbd1", 00:05:36.588 "bdev_name": "Malloc1" 00:05:36.588 } 00:05:36.588 ]' 00:05:36.588 18:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.588 /dev/nbd1' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.588 /dev/nbd1' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.588 256+0 records in 00:05:36.588 256+0 records out 00:05:36.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103014 s, 102 MB/s 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.588 256+0 records in 00:05:36.588 256+0 records out 00:05:36.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143944 s, 72.8 MB/s 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.588 256+0 records in 00:05:36.588 256+0 records out 00:05:36.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142517 s, 73.6 MB/s 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.588 18:20:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.847 18:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.847 18:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.847 18:20:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.848 18:20:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.106 18:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.365 18:20:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.365 18:20:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.622 18:20:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.622 [2024-07-15 18:20:23.100823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.622 [2024-07-15 18:20:23.167252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.622 [2024-07-15 18:20:23.167253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.879 [2024-07-15 18:20:23.208625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.879 [2024-07-15 18:20:23.208664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.409 18:20:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3740910 /var/tmp/spdk-nbd.sock 00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3740910 ']' 00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
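With Round 2 torn down, event.sh is now waiting for the app's final self-restart. The overall loop shape, reconstructed from the event.sh line numbers in the trace (the per-round Malloc setup is elided here; see the Round 0 sketch above):

# Three rounds of start/verify, each ended by asking the app to shut itself
# down over RPC and sleeping while it restarts for the next round.
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done
# Round 3: the app must come up one last time before the final killprocess.
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
killprocess "$repeat_pid"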
00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.409 18:20:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.668 18:20:26 event.app_repeat -- event/event.sh@39 -- # killprocess 3740910 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3740910 ']' 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3740910 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3740910 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3740910' 00:05:40.668 killing process with pid 3740910 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3740910 00:05:40.668 18:20:26 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3740910 00:05:40.926 spdk_app_start is called in Round 0. 00:05:40.926 Shutdown signal received, stop current app iteration 00:05:40.926 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:40.926 spdk_app_start is called in Round 1. 00:05:40.926 Shutdown signal received, stop current app iteration 00:05:40.926 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:40.926 spdk_app_start is called in Round 2. 00:05:40.926 Shutdown signal received, stop current app iteration 00:05:40.926 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:40.926 spdk_app_start is called in Round 3. 
00:05:40.926 Shutdown signal received, stop current app iteration 00:05:40.926 18:20:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.926 18:20:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.926 00:05:40.926 real 0m16.349s 00:05:40.926 user 0m35.487s 00:05:40.926 sys 0m2.333s 00:05:40.926 18:20:26 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.926 18:20:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.926 ************************************ 00:05:40.926 END TEST app_repeat 00:05:40.926 ************************************ 00:05:40.926 18:20:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.926 18:20:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.926 18:20:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.926 18:20:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.926 18:20:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.926 18:20:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.926 ************************************ 00:05:40.926 START TEST cpu_locks 00:05:40.926 ************************************ 00:05:40.926 18:20:26 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.185 * Looking for test storage... 00:05:41.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:41.185 18:20:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.185 18:20:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.185 18:20:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.185 18:20:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.185 18:20:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.185 18:20:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.185 18:20:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.185 ************************************ 00:05:41.185 START TEST default_locks 00:05:41.185 ************************************ 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3743896 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3743896 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3743896 ']' 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
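TEST default_locks, starting here, asserts that a target launched with -m 0x1 holds a file lock for its core. The locks_exist probe is essentially one pipeline (the stray "lslocks: write error" lines below are harmless: grep -q exits at the first match and lslocks gets EPIPE on the broken pipe):

    # True iff the pid holds at least one SPDK core lock
    # (the suite names the lock files /var/tmp/spdk_cpu_lock_*, cf. the
    # check_remaining_locks step near the end of this section).
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }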
00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.185 18:20:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.185 [2024-07-15 18:20:26.576933] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:41.185 [2024-07-15 18:20:26.576974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743896 ] 00:05:41.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.185 [2024-07-15 18:20:26.640277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.185 [2024-07-15 18:20:26.718923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.119 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.119 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:42.119 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.120 lslocks: write error 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3743896 ']' 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3743896' 00:05:42.120 killing process with pid 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3743896 00:05:42.120 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3743896 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3743896 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3743896 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:42.378 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3743896 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3743896 ']' 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3743896) - No such process 00:05:42.379 ERROR: process (pid: 3743896) is no longer running 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.379 00:05:42.379 real 0m1.406s 00:05:42.379 user 0m1.456s 00:05:42.379 sys 0m0.460s 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.379 18:20:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.379 ************************************ 00:05:42.379 END TEST default_locks 00:05:42.379 ************************************ 00:05:42.660 18:20:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.660 18:20:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:42.660 18:20:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.660 18:20:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.660 18:20:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.660 ************************************ 00:05:42.660 START TEST default_locks_via_rpc 00:05:42.660 ************************************ 00:05:42.660 18:20:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:42.660 18:20:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3744160 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3744160 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3744160 ']' 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.660 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.660 [2024-07-15 18:20:28.051634] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:42.660 [2024-07-15 18:20:28.051674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744160 ] 00:05:42.660 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.660 [2024-07-15 18:20:28.115856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.660 [2024-07-15 18:20:28.193911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3744160 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3744160 00:05:43.596 18:20:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
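The framework_disable_cpumask_locks / framework_enable_cpumask_locks calls above toggle the same locks at runtime rather than at startup. A sketch of the round trip, assuming rpc.py talking to the default /var/tmp/spdk.sock and $tgt_pid holding the target's pid:

    scripts/rpc.py framework_disable_cpumask_locks          # target releases its core locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo none    # expect: none while disabled
    scripts/rpc.py framework_enable_cpumask_locks           # target re-claims its cores
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock           # expect: lock held again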
00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3744160 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3744160 ']' 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3744160 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744160 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744160' 00:05:43.596 killing process with pid 3744160 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3744160 00:05:43.596 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3744160 00:05:43.855 00:05:43.855 real 0m1.402s 00:05:43.855 user 0m1.464s 00:05:43.855 sys 0m0.447s 00:05:43.855 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.855 18:20:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.855 ************************************ 00:05:43.855 END TEST default_locks_via_rpc 00:05:43.855 ************************************ 00:05:44.113 18:20:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.113 18:20:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:44.113 18:20:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.113 18:20:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.113 18:20:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.113 ************************************ 00:05:44.113 START TEST non_locking_app_on_locked_coremask 00:05:44.113 ************************************ 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3744418 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3744418 /var/tmp/spdk.sock 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744418 ']' 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.113 18:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.113 [2024-07-15 18:20:29.516887] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:44.113 [2024-07-15 18:20:29.516928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744418 ] 00:05:44.113 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.113 [2024-07-15 18:20:29.581081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.113 [2024-07-15 18:20:29.658920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3744648 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3744648 /var/tmp/spdk2.sock 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744648 ']' 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.048 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.049 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.049 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.049 18:20:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.049 [2024-07-15 18:20:30.355453] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:45.049 [2024-07-15 18:20:30.355501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744648 ] 00:05:45.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.049 [2024-07-15 18:20:30.425186] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
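The "CPU core locks deactivated." notice above is the point of this test: a second target may share core 0 with a lock-holding one as long as it opts out of locking. A minimal sketch with repo-relative paths:

    build/bin/spdk_tgt -m 0x1 &                       # first instance claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                      # same core, but takes no lock
    # The second instance logs "CPU core locks deactivated." and starts normally.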
00:05:45.049 [2024-07-15 18:20:30.425209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.049 [2024-07-15 18:20:30.569511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.615 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.615 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.615 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3744418 00:05:45.615 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3744418 00:05:45.615 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.873 lslocks: write error 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3744418 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744418 ']' 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744418 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.873 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744418 00:05:46.132 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.132 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.132 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744418' 00:05:46.132 killing process with pid 3744418 00:05:46.132 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3744418 00:05:46.132 18:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3744418 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3744648 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744648 ']' 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744648 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744648 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744648' 00:05:46.700 
killing process with pid 3744648 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3744648 00:05:46.700 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3744648 00:05:46.959 00:05:46.959 real 0m2.932s 00:05:46.959 user 0m3.132s 00:05:46.959 sys 0m0.805s 00:05:46.959 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.959 18:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.959 ************************************ 00:05:46.959 END TEST non_locking_app_on_locked_coremask 00:05:46.959 ************************************ 00:05:46.959 18:20:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.959 18:20:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.959 18:20:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.959 18:20:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.959 18:20:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.959 ************************************ 00:05:46.959 START TEST locking_app_on_unlocked_coremask 00:05:46.959 ************************************ 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3744925 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3744925 /var/tmp/spdk.sock 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3744925 ']' 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.959 18:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.218 [2024-07-15 18:20:32.520318] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:47.219 [2024-07-15 18:20:32.520367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744925 ] 00:05:47.219 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.219 [2024-07-15 18:20:32.583809] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.219 [2024-07-15 18:20:32.583834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.219 [2024-07-15 18:20:32.651941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3745154 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3745154 /var/tmp/spdk2.sock 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745154 ']' 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.786 18:20:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.045 [2024-07-15 18:20:33.367196] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
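This is the mirror image of the previous test: the first target ran with --disable-cpumask-locks, so the locking second target (pid 3745154, started just above) can claim core 0 for itself. Sketched:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # runs unlocked on core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 itself
    # Only the second pid shows a spdk_cpu_lock entry under lslocks.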
00:05:48.045 [2024-07-15 18:20:33.367240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745154 ] 00:05:48.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.045 [2024-07-15 18:20:33.442257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.045 [2024-07-15 18:20:33.588505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.613 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.613 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.613 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3745154 00:05:48.613 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3745154 00:05:48.613 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.181 lslocks: write error 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3744925 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3744925 ']' 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3744925 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744925 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744925' 00:05:49.181 killing process with pid 3744925 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3744925 00:05:49.181 18:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3744925 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3745154 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3745154 ']' 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3745154 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745154 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745154' 00:05:49.749 killing process with pid 3745154 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3745154 00:05:49.749 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3745154 00:05:50.008 00:05:50.008 real 0m3.021s 00:05:50.008 user 0m3.212s 00:05:50.008 sys 0m0.860s 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.008 ************************************ 00:05:50.008 END TEST locking_app_on_unlocked_coremask 00:05:50.008 ************************************ 00:05:50.008 18:20:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.008 18:20:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:50.008 18:20:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.008 18:20:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.008 18:20:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.008 ************************************ 00:05:50.008 START TEST locking_app_on_locked_coremask 00:05:50.008 ************************************ 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3745528 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3745528 /var/tmp/spdk.sock 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745528 ']' 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.008 18:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.267 [2024-07-15 18:20:35.610869] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
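The test starting here expects the second launch to fail, which the suite expresses through its NOT wrapper (visible in the xtrace below as valid_exec_arg plus the es bookkeeping). A simplified sketch; the real helper also normalizes signal exit codes above 128, which is what the (( es > 128 )) check below is for:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1      # unexpected success
        fi
        return 0          # failure was the expected outcome
    }
    # Usage: NOT waitforlisten "$pid2" /var/tmp/spdk2.sock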
00:05:50.267 [2024-07-15 18:20:35.610914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745528 ] 00:05:50.267 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.267 [2024-07-15 18:20:35.674421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.267 [2024-07-15 18:20:35.754732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3745657 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3745657 /var/tmp/spdk2.sock 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3745657 /var/tmp/spdk2.sock 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3745657 /var/tmp/spdk2.sock 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745657 ']' 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.249 18:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 [2024-07-15 18:20:36.446448] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:51.249 [2024-07-15 18:20:36.446493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745657 ] 00:05:51.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.249 [2024-07-15 18:20:36.520834] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3745528 has claimed it. 00:05:51.249 [2024-07-15 18:20:36.520867] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3745657) - No such process 00:05:51.514 ERROR: process (pid: 3745657) is no longer running 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3745528 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3745528 00:05:51.514 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.081 lslocks: write error 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3745528 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3745528 ']' 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3745528 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745528 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745528' 00:05:52.081 killing process with pid 3745528 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3745528 00:05:52.081 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3745528 00:05:52.339 00:05:52.339 real 0m2.225s 00:05:52.339 user 0m2.437s 00:05:52.339 sys 0m0.612s 00:05:52.339 18:20:37 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.339 18:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.339 ************************************ 00:05:52.339 END TEST locking_app_on_locked_coremask 00:05:52.339 ************************************ 00:05:52.339 18:20:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.339 18:20:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.340 18:20:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.340 18:20:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.340 18:20:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.340 ************************************ 00:05:52.340 START TEST locking_overlapped_coremask 00:05:52.340 ************************************ 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3745917 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3745917 /var/tmp/spdk.sock 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3745917 ']' 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.340 18:20:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 [2024-07-15 18:20:37.905220] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
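TEST locking_overlapped_coremask, starting here, turns on mask arithmetic: -m takes a hex cpumask with one bit per core, and the two masks used below intersect on core 2, which is exactly where claim_cpu_cores() will report the conflict:

    #   0x07 = 0b00111 -> cores 0,1,2  (this target)
    #   0x1c = 0b11100 -> cores 2,3,4  (the one launched against it)
    printf '0x%x\n' $(( 0x07 & 0x1c ))   # prints 0x4: core 2 is contested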
00:05:52.598 [2024-07-15 18:20:37.905264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745917 ] 00:05:52.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.598 [2024-07-15 18:20:37.970190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.598 [2024-07-15 18:20:38.039397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.598 [2024-07-15 18:20:38.039436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.598 [2024-07-15 18:20:38.039437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3746166 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3746166 /var/tmp/spdk2.sock 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3746166 /var/tmp/spdk2.sock 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3746166 /var/tmp/spdk2.sock 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3746166 ']' 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.164 18:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.422 [2024-07-15 18:20:38.759644] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:53.422 [2024-07-15 18:20:38.759686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746166 ] 00:05:53.422 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.422 [2024-07-15 18:20:38.834681] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3745917 has claimed it. 00:05:53.422 [2024-07-15 18:20:38.834711] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3746166) - No such process 00:05:53.989 ERROR: process (pid: 3746166) is no longer running 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3745917 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3745917 ']' 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3745917 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745917 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745917' 00:05:53.989 killing process with pid 3745917 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3745917 00:05:53.989 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3745917 00:05:54.248 00:05:54.248 real 0m1.874s 00:05:54.248 user 0m5.278s 00:05:54.248 sys 0m0.409s 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 ************************************ 00:05:54.248 END TEST locking_overlapped_coremask 00:05:54.248 ************************************ 00:05:54.248 18:20:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.248 18:20:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.248 18:20:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.248 18:20:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.248 18:20:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.248 ************************************ 00:05:54.248 START TEST locking_overlapped_coremask_via_rpc 00:05:54.248 ************************************ 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3746366 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3746366 /var/tmp/spdk.sock 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746366 ']' 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.248 18:20:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.507 [2024-07-15 18:20:39.845715] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:54.507 [2024-07-15 18:20:39.845755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746366 ] 00:05:54.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.507 [2024-07-15 18:20:39.909203] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
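Before tearing down the 0x7 target, the test that just ended verified via check_remaining_locks that, after the failed overlap, the survivor still held exactly locks 000-002. The glob comparison from the xtrace above, restated as a sketch:

    locks=(/var/tmp/spdk_cpu_lock_*)                 # whatever is on disk
    expected=(/var/tmp/spdk_cpu_lock_{000..002})     # cores 0,1,2 and nothing else
    [[ "${locks[*]}" == "${expected[*]}" ]]          # fails on any stray or missing lock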
00:05:54.507 [2024-07-15 18:20:39.909227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.507 [2024-07-15 18:20:39.988722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.507 [2024-07-15 18:20:39.988755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.507 [2024-07-15 18:20:39.988755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3746440 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3746440 /var/tmp/spdk2.sock 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746440 ']' 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.441 18:20:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.441 [2024-07-15 18:20:40.698066] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:55.441 [2024-07-15 18:20:40.698115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746440 ] 00:05:55.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.441 [2024-07-15 18:20:40.772899] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.441 [2024-07-15 18:20:40.772923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.441 [2024-07-15 18:20:40.918305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.441 [2024-07-15 18:20:40.921398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.441 [2024-07-15 18:20:40.921399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 [2024-07-15 18:20:41.511406] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3746366 has claimed it. 
00:05:56.008 request: 00:05:56.008 { 00:05:56.008 "method": "framework_enable_cpumask_locks", 00:05:56.008 "req_id": 1 00:05:56.008 } 00:05:56.008 Got JSON-RPC error response 00:05:56.008 response: 00:05:56.008 { 00:05:56.008 "code": -32603, 00:05:56.008 "message": "Failed to claim CPU core: 2" 00:05:56.008 } 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3746366 /var/tmp/spdk.sock 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746366 ']' 00:05:56.008 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.009 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.009 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.009 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.009 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3746440 /var/tmp/spdk2.sock 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3746440 ']' 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
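With locking deferred, the first target (pid 3746366, mask 0x7) claims its cores over RPC and succeeds; the identical call against the second target (mask 0x1c) fails with the JSON-RPC error shown above, because core 2, the overlap between the two masks, is already locked. A sketch of the same exchange using scripts/rpc.py, which the harness's rpc_cmd wraps (socket paths as in the log):

    # first target: locks cores 0-2, returns success
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target: core 2 is already claimed, so the call returns
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks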
00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.267 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.525 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.525 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.525 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.525 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.526 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.526 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.526 00:05:56.526 real 0m2.092s 00:05:56.526 user 0m0.868s 00:05:56.526 sys 0m0.153s 00:05:56.526 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.526 18:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.526 ************************************ 00:05:56.526 END TEST locking_overlapped_coremask_via_rpc 00:05:56.526 ************************************ 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.526 18:20:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.526 18:20:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3746366 ]] 00:05:56.526 18:20:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3746366 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746366 ']' 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746366 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746366 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746366' 00:05:56.526 killing process with pid 3746366 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3746366 00:05:56.526 18:20:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3746366 00:05:56.784 18:20:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3746440 ]] 00:05:56.784 18:20:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3746440 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746440 ']' 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746440 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746440 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746440' 00:05:56.784 killing process with pid 3746440 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3746440 00:05:56.784 18:20:42 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3746440 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3746366 ]] 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3746366 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746366 ']' 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746366 00:05:57.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3746366) - No such process 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3746366 is not found' 00:05:57.351 Process with pid 3746366 is not found 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3746440 ]] 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3746440 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3746440 ']' 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3746440 00:05:57.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3746440) - No such process 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3746440 is not found' 00:05:57.351 Process with pid 3746440 is not found 00:05:57.351 18:20:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.351 00:05:57.351 real 0m16.244s 00:05:57.351 user 0m28.276s 00:05:57.351 sys 0m4.675s 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.351 18:20:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.351 ************************************ 00:05:57.351 END TEST cpu_locks 00:05:57.351 ************************************ 00:05:57.351 18:20:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.351 00:05:57.351 real 0m41.514s 00:05:57.351 user 1m19.578s 00:05:57.351 sys 0m7.966s 00:05:57.351 18:20:42 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.351 18:20:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.351 ************************************ 00:05:57.351 END TEST event 00:05:57.351 ************************************ 00:05:57.351 18:20:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.351 18:20:42 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:57.351 18:20:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.351 18:20:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.351 
18:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:57.351 ************************************ 00:05:57.351 START TEST thread 00:05:57.351 ************************************ 00:05:57.351 18:20:42 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:57.351 * Looking for test storage... 00:05:57.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:57.351 18:20:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.351 18:20:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:57.351 18:20:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.351 18:20:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.351 ************************************ 00:05:57.351 START TEST thread_poller_perf 00:05:57.351 ************************************ 00:05:57.351 18:20:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.351 [2024-07-15 18:20:42.892431] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:57.351 [2024-07-15 18:20:42.892491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746989 ] 00:05:57.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.612 [2024-07-15 18:20:42.961212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.612 [2024-07-15 18:20:43.033222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.612 Running 1000 pollers for 1 seconds with 1 microseconds period. 
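For reading the ====== tables that follow: poller_perf reports busy TSC cycles, the total number of poller executions, and the TSC frequency, and derives poller_cost as busy cycles per execution, converted to nanoseconds via tsc_hz. Both runs below are consistent with integer truncation at each step, which is an inference from the printed values rather than anything the log states. A check against the first run's figures:

    # busy=2105808592 cyc over total_run_count=424000 at tsc_hz=2100000000
    awk 'BEGIN { busy=2105808592; runs=424000; hz=2100000000;
                 cyc = int(busy/runs);           # 4966 cyc, as printed below
                 ns  = int(cyc * 1e9 / hz);      # 2364 nsec at 2.1 GHz
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns }'

The second, zero-period run below executes far more pollers (5630000) at a much lower per-call cost, presumably because untimed pollers skip the period bookkeeping a 1-microsecond timed poller needs.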
00:05:58.547 ====================================== 00:05:58.547 busy:2105808592 (cyc) 00:05:58.547 total_run_count: 424000 00:05:58.547 tsc_hz: 2100000000 (cyc) 00:05:58.547 ====================================== 00:05:58.547 poller_cost: 4966 (cyc), 2364 (nsec) 00:05:58.806 00:05:58.806 real 0m1.228s 00:05:58.806 user 0m1.144s 00:05:58.807 sys 0m0.079s 00:05:58.807 18:20:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.807 18:20:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.807 ************************************ 00:05:58.807 END TEST thread_poller_perf 00:05:58.807 ************************************ 00:05:58.807 18:20:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:58.807 18:20:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.807 18:20:44 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.807 18:20:44 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.807 18:20:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.807 ************************************ 00:05:58.807 START TEST thread_poller_perf 00:05:58.807 ************************************ 00:05:58.807 18:20:44 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.807 [2024-07-15 18:20:44.196011] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:58.807 [2024-07-15 18:20:44.196079] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747244 ] 00:05:58.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.807 [2024-07-15 18:20:44.266778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.807 [2024-07-15 18:20:44.337577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.807 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:00.183 ====================================== 00:06:00.183 busy:2101415248 (cyc) 00:06:00.183 total_run_count: 5630000 00:06:00.183 tsc_hz: 2100000000 (cyc) 00:06:00.183 ====================================== 00:06:00.183 poller_cost: 373 (cyc), 177 (nsec) 00:06:00.183 00:06:00.183 real 0m1.233s 00:06:00.183 user 0m1.142s 00:06:00.183 sys 0m0.087s 00:06:00.183 18:20:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.183 18:20:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.183 ************************************ 00:06:00.183 END TEST thread_poller_perf 00:06:00.183 ************************************ 00:06:00.183 18:20:45 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:00.183 18:20:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:00.183 00:06:00.183 real 0m2.685s 00:06:00.183 user 0m2.383s 00:06:00.183 sys 0m0.310s 00:06:00.183 18:20:45 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.183 18:20:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.183 ************************************ 00:06:00.183 END TEST thread 00:06:00.183 ************************************ 00:06:00.183 18:20:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.183 18:20:45 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:00.183 18:20:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.183 18:20:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.183 18:20:45 -- common/autotest_common.sh@10 -- # set +x 00:06:00.183 ************************************ 00:06:00.183 START TEST accel 00:06:00.183 ************************************ 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:00.183 * Looking for test storage... 00:06:00.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:00.183 18:20:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:00.183 18:20:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:00.183 18:20:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.183 18:20:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3747529 00:06:00.183 18:20:45 accel -- accel/accel.sh@63 -- # waitforlisten 3747529 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@829 -- # '[' -z 3747529 ']' 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.183 18:20:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.183 18:20:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
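The accel suite boots its own target with the configuration streamed over an inherited descriptor (-c /dev/fd/63); build_accel_config assembles accel_json_cfg from the module flags, and with every flag at 0 here no hardware accel module is configured, leaving the software engine to service all opcodes. A rough stand-in for that launch; the empty subsystems body is my assumption, since the harness's exact JSON is not shown in the log:

    # assumed equivalent of build_accel_config with no modules enabled:
    # feed a minimal JSON config through a pipe fd, as -c /dev/fd/63 does
    ./build/bin/spdk_tgt -c <(echo '{"subsystems": []}')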
00:06:00.183 18:20:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.183 18:20:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.183 18:20:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.183 18:20:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.184 18:20:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.184 18:20:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.184 18:20:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:00.184 18:20:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:00.184 [2024-07-15 18:20:45.652772] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:00.184 [2024-07-15 18:20:45.652824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747529 ] 00:06:00.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.184 [2024-07-15 18:20:45.721897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.442 [2024-07-15 18:20:45.800830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.010 18:20:46 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.010 18:20:46 accel -- common/autotest_common.sh@862 -- # return 0 00:06:01.010 18:20:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:01.010 18:20:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:01.010 18:20:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:01.010 18:20:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:01.010 18:20:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:01.010 18:20:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:01.010 18:20:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.010 18:20:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:01.010 18:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.010 18:20:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.010 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.010 
18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.010 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.011 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.011 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.011 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.011 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.011 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.011 18:20:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:01.011 18:20:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:01.011 18:20:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:01.011 18:20:46 accel -- accel/accel.sh@75 -- # killprocess 3747529 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@948 -- # '[' -z 3747529 ']' 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@952 -- # kill -0 3747529 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@953 -- # uname 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747529 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747529' 00:06:01.011 killing process with pid 3747529 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@967 -- # kill 3747529 00:06:01.011 18:20:46 accel -- common/autotest_common.sh@972 -- # wait 3747529 00:06:01.579 18:20:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:01.579 18:20:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.579 18:20:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:01.579 18:20:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
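The get_expected_opcs sequence above asks the target how each accel opcode is assigned and records the answers in expected_opcs via the long IFS== read loop; with no module loaded, every opcode comes back mapped to software. The same query by hand, with the jq filter verbatim from the harness (the sample output line is an illustration, not captured from this run):

    # list opcode=module pairs reported by the target
    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # e.g.: copy=software, fill=software, crc32c=software, ...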
00:06:01.579 18:20:46 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.579 18:20:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.579 18:20:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.579 18:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.579 ************************************ 00:06:01.579 START TEST accel_missing_filename 00:06:01.579 ************************************ 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.579 18:20:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:01.579 18:20:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:01.579 [2024-07-15 18:20:47.020350] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:01.579 [2024-07-15 18:20:47.020419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747802 ] 00:06:01.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.579 [2024-07-15 18:20:47.090409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.838 [2024-07-15 18:20:47.167014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.838 [2024-07-15 18:20:47.207941] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.838 [2024-07-15 18:20:47.267225] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:01.838 A filename is required. 
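accel_missing_filename checks only the failure path: a compress workload needs its uncompressed input via -l, and without one accel_perf aborts at startup with the message above. Judging by the options in play, the form below should at least clear argument parsing (whether compression itself runs depends on how this build was configured, which the log does not show):

    # compress with an input file supplied; dropping -l reproduces "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    # note: adding -y here is rejected as well, as the compress_verify test just below shows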
00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.838 00:06:01.838 real 0m0.348s 00:06:01.838 user 0m0.258s 00:06:01.838 sys 0m0.130s 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.838 18:20:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:01.838 ************************************ 00:06:01.838 END TEST accel_missing_filename 00:06:01.838 ************************************ 00:06:01.838 18:20:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.838 18:20:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.838 18:20:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:01.838 18:20:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.838 18:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.096 ************************************ 00:06:02.096 START TEST accel_compress_verify 00:06:02.096 ************************************ 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.096 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.096 18:20:47 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:02.096 18:20:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:02.096 [2024-07-15 18:20:47.437010] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:02.096 [2024-07-15 18:20:47.437080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747830 ] 00:06:02.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.096 [2024-07-15 18:20:47.506882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.096 [2024-07-15 18:20:47.578706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.096 [2024-07-15 18:20:47.619014] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.354 [2024-07-15 18:20:47.678488] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:02.354 00:06:02.354 Compression does not support the verify option, aborting. 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.354 00:06:02.354 real 0m0.343s 00:06:02.354 user 0m0.252s 00:06:02.354 sys 0m0.131s 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.354 18:20:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:02.354 ************************************ 00:06:02.354 END TEST accel_compress_verify 00:06:02.354 ************************************ 00:06:02.354 18:20:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.354 18:20:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:02.354 18:20:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:02.354 18:20:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.354 18:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.355 ************************************ 00:06:02.355 START TEST accel_wrong_workload 00:06:02.355 ************************************ 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.355 18:20:47 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:02.355 18:20:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:02.355 Unsupported workload type: foobar 00:06:02.355 [2024-07-15 18:20:47.845018] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:02.355 accel_perf options: 00:06:02.355 [-h help message] 00:06:02.355 [-q queue depth per core] 00:06:02.355 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.355 [-T number of threads per core 00:06:02.355 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.355 [-t time in seconds] 00:06:02.355 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.355 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:02.355 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.355 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.355 [-S for crc32c workload, use this seed value (default 0) 00:06:02.355 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.355 [-f for fill workload, use this BYTE value (default 255) 00:06:02.355 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.355 [-y verify result if this switch is on] 00:06:02.355 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.355 Can be used to spread operations across a wider range of memory. 
00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.355 00:06:02.355 real 0m0.035s 00:06:02.355 user 0m0.023s 00:06:02.355 sys 0m0.012s 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.355 18:20:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:02.355 ************************************ 00:06:02.355 END TEST accel_wrong_workload 00:06:02.355 ************************************ 00:06:02.355 Error: writing output failed: Broken pipe 00:06:02.355 18:20:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.355 18:20:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.355 18:20:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:02.355 18:20:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.355 18:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.613 ************************************ 00:06:02.613 START TEST accel_negative_buffers 00:06:02.613 ************************************ 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:02.613 18:20:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:02.613 -x option must be non-negative. 
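Both negative tests land in the same place: an unknown -w workload and a negative -x source-buffer count each make spdk_app_parse_args fail before the app starts, printing the usage text and returning the nonzero status the NOT wrapper maps to es=1. Per that usage text, xor takes at least two source buffers, so the minimal form the parser would accept is:

    # xor with the documented minimum of two source buffers, verifying results
    ./build/examples/accel_perf -t 1 -w xor -y -x 2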
00:06:02.613 [2024-07-15 18:20:47.944122] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:02.613 accel_perf options: 00:06:02.613 [-h help message] 00:06:02.613 [-q queue depth per core] 00:06:02.613 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.613 [-T number of threads per core 00:06:02.613 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.613 [-t time in seconds] 00:06:02.613 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.613 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:02.613 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.613 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.613 [-S for crc32c workload, use this seed value (default 0) 00:06:02.613 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.613 [-f for fill workload, use this BYTE value (default 255) 00:06:02.613 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.613 [-y verify result if this switch is on] 00:06:02.613 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.613 Can be used to spread operations across a wider range of memory. 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.613 00:06:02.613 real 0m0.032s 00:06:02.613 user 0m0.022s 00:06:02.613 sys 0m0.011s 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.613 18:20:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:02.613 ************************************ 00:06:02.613 END TEST accel_negative_buffers 00:06:02.613 ************************************ 00:06:02.613 Error: writing output failed: Broken pipe 00:06:02.613 18:20:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.613 18:20:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:02.613 18:20:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.613 18:20:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.613 18:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.613 ************************************ 00:06:02.613 START TEST accel_crc32c 00:06:02.613 ************************************ 00:06:02.613 18:20:48 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:02.613 18:20:48 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:02.613 [2024-07-15 18:20:48.033779] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:02.613 [2024-07-15 18:20:48.033849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748005 ] 00:06:02.613 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.613 [2024-07-15 18:20:48.101721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.871 [2024-07-15 18:20:48.176850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.871 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.872 18:20:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:03.803 18:20:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.803 00:06:03.803 real 0m1.350s 00:06:03.803 user 0m1.234s 00:06:03.803 sys 0m0.130s 00:06:03.803 18:20:49 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.803 18:20:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:03.803 ************************************ 00:06:03.804 END TEST accel_crc32c 00:06:03.804 ************************************ 00:06:04.061 18:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.061 18:20:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:04.061 18:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:04.061 18:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.061 18:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.061 ************************************ 00:06:04.061 START TEST accel_crc32c_C2 00:06:04.061 ************************************ 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:04.061 18:20:49 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.061 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:04.061 [2024-07-15 18:20:49.451102] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:04.061 [2024-07-15 18:20:49.451151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748275 ] 00:06:04.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.061 [2024-07-15 18:20:49.517220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.061 [2024-07-15 18:20:49.592894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:04.319 18:20:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.252 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.253 00:06:05.253 real 0m1.350s 00:06:05.253 user 0m1.242s 00:06:05.253 sys 0m0.121s 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.253 18:20:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:05.253 ************************************ 00:06:05.253 END TEST accel_crc32c_C2 00:06:05.253 ************************************ 00:06:05.253 18:20:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.253 18:20:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:05.253 18:20:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.253 18:20:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.512 18:20:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.512 ************************************ 00:06:05.512 START TEST accel_copy 00:06:05.512 ************************************ 00:06:05.512 18:20:50 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:05.512 18:20:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:05.512 [2024-07-15 18:20:50.866821] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:05.512 [2024-07-15 18:20:50.866867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748546 ] 00:06:05.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.512 [2024-07-15 18:20:50.933748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.512 [2024-07-15 18:20:51.004858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.512 18:20:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 
18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:06.886 18:20:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.886 00:06:06.886 real 0m1.344s 00:06:06.886 user 0m1.234s 00:06:06.886 sys 0m0.123s 00:06:06.886 18:20:52 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.886 18:20:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:06.886 ************************************ 00:06:06.886 END TEST accel_copy 00:06:06.886 ************************************ 00:06:06.886 18:20:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.886 18:20:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.886 18:20:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:06.886 18:20:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.886 18:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.886 ************************************ 00:06:06.886 START TEST accel_fill 00:06:06.886 ************************************ 00:06:06.886 18:20:52 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:06.886 18:20:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:06.886 [2024-07-15 18:20:52.276448] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:06.886 [2024-07-15 18:20:52.276503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748821 ] 00:06:06.886 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.886 [2024-07-15 18:20:52.344746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.886 [2024-07-15 18:20:52.416065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.144 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
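The fill test is the most heavily parameterized run so far: the -f 128 argument appears in the trace as val=0x80 (128 decimal, the fill byte), and the two val=64 entries line up with the -q 64 and -a 64 arguments from the run_test line. A hedged re-run of just this workload (flag semantics are accel_perf's; queue depth and alignment for -q/-a are assumptions):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -f 128 -> 0x80 fill byte; -q 64 / -a 64 copied verbatim from the trace
    "$SPDK_DIR"/build/examples/accel_perf -c <(echo '{"subsystems":[]}') \
        -t 1 -w fill -f 128 -q 64 -a 64 -y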
00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.145 18:20:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:08.079 18:20:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.079 00:06:08.079 real 0m1.347s 00:06:08.079 user 0m1.226s 00:06:08.079 sys 0m0.134s 00:06:08.079 18:20:53 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.079 18:20:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:08.079 ************************************ 00:06:08.079 END TEST accel_fill 00:06:08.079 ************************************ 00:06:08.079 18:20:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.079 18:20:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:08.079 18:20:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.079 18:20:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.079 18:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.338 ************************************ 00:06:08.338 START TEST accel_copy_crc32c 00:06:08.338 ************************************ 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:08.338 [2024-07-15 18:20:53.690192] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:08.338 [2024-07-15 18:20:53.690239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749089 ] 00:06:08.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.338 [2024-07-15 18:20:53.756987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.338 [2024-07-15 18:20:53.827811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.338 
18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.338 18:20:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.712 00:06:09.712 real 0m1.343s 00:06:09.712 user 0m1.238s 00:06:09.712 sys 0m0.119s 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.712 18:20:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:09.712 ************************************ 00:06:09.712 END TEST accel_copy_crc32c 00:06:09.712 ************************************ 00:06:09.712 18:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.712 18:20:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:09.712 18:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:09.712 18:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.712 18:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.712 ************************************ 00:06:09.712 START TEST accel_copy_crc32c_C2 00:06:09.712 ************************************ 00:06:09.712 18:20:55 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.712 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:09.712 [2024-07-15 18:20:55.100857] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:09.712 [2024-07-15 18:20:55.100920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749349 ] 00:06:09.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.712 [2024-07-15 18:20:55.170768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.713 [2024-07-15 18:20:55.241704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
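Every test boots a fresh SPDK application: the EAL parameter line above carries a per-process --file-prefix=spdk_pid<NNN>, so hugepage shm files from overlapping runs cannot collide, and a fixed --base-virtaddr so allocations land at a predictable virtual address. A hedged way to check for leftover hugepage files from such runs (the /dev/hugepages mount point and the <prefix>map_N naming are DPDK defaults, assumed here):

    ls /dev/hugepages/spdk_pid*map_* 2>/dev/null || echo "no leftover hugepage files"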
00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.971 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.972 18:20:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
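For the -C 2 variant of copy_crc32c, the settings trace above carries both a val='4096 bytes' and a val='8192 bytes' entry, which is consistent with two chained 4096-byte source buffers feeding one destination; the exact meaning of -C is accel_perf's, and the doubling is an inference from the trace:

    # size consistency check for the values shown in the trace
    echo $(( 2 * 4096 ))   # prints 8192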
00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.907 00:06:10.907 real 0m1.349s 00:06:10.907 user 0m1.238s 00:06:10.907 sys 0m0.124s 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.907 18:20:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:10.907 ************************************ 00:06:10.907 END TEST accel_copy_crc32c_C2 00:06:10.907 ************************************ 00:06:10.907 18:20:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.907 18:20:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:10.907 18:20:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.907 18:20:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.907 18:20:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.165 ************************************ 00:06:11.165 START TEST accel_dualcast 00:06:11.165 ************************************ 00:06:11.165 18:20:56 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:11.165 [2024-07-15 18:20:56.513056] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
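The dualcast workload starting here copies one 4096-byte source into two destination buffers in a single operation. A hedged shell illustration of that data movement (purely illustrative; the benchmark itself runs inside the accel framework's software module):

    # one read, two writes: the fan-out pattern dualcast benchmarks
    src=$(mktemp); dst1=$(mktemp); dst2=$(mktemp)
    head -c 4096 /dev/urandom > "$src"      # 4096 bytes, matching the trace
    tee "$dst1" < "$src" > "$dst2"          # single source, two destinations
    cmp -s "$src" "$dst1" && cmp -s "$src" "$dst2" && echo verified  # mirrors -y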
00:06:11.165 [2024-07-15 18:20:56.513110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749607 ] 00:06:11.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.165 [2024-07-15 18:20:56.581098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.165 [2024-07-15 18:20:56.652198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.165 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.166 18:20:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:12.541 18:20:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.541 00:06:12.541 real 0m1.347s 00:06:12.541 user 0m1.236s 00:06:12.541 sys 0m0.124s 00:06:12.541 18:20:57 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.541 18:20:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:12.541 ************************************ 00:06:12.541 END TEST accel_dualcast 00:06:12.541 ************************************ 00:06:12.541 18:20:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.541 18:20:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:12.541 18:20:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.541 18:20:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.541 18:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.541 ************************************ 00:06:12.541 START TEST accel_compare 00:06:12.541 ************************************ 00:06:12.541 18:20:57 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:12.541 18:20:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:12.541 [2024-07-15 18:20:57.925102] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
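The three accel.sh@27 checks that close each suite are the pass criteria: a module name and an opcode were parsed from the run, and the module that executed is the expected one. The \s\o\f\t\w\a\r\e form is only xtrace escaping a literal pattern match; with the variables restored (variable names assumed), the checks amount to:

    [[ -n $accel_module ]]             # here always: software
    [[ -n $accel_opc ]]                # dualcast, compare, xor, dif_verify, ...
    [[ $accel_module == software ]]    # rendered above as software == \s\o\f\t\w\a\r\e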
00:06:12.541 [2024-07-15 18:20:57.925150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749853 ] 00:06:12.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.541 [2024-07-15 18:20:57.992705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.541 [2024-07-15 18:20:58.064214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.800 18:20:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.735 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.736 
18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:13.736 18:20:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.736 00:06:13.736 real 0m1.345s 00:06:13.736 user 0m1.229s 00:06:13.736 sys 0m0.128s 00:06:13.736 18:20:59 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.736 18:20:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:13.736 ************************************ 00:06:13.736 END TEST accel_compare 00:06:13.736 ************************************ 00:06:13.736 18:20:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.736 18:20:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:13.736 18:20:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.736 18:20:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.736 18:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.996 ************************************ 00:06:13.996 START TEST accel_xor 00:06:13.996 ************************************ 00:06:13.996 18:20:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:13.996 [2024-07-15 18:20:59.339268] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
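The xor workload runs twice in this section: once with the default source count (the val=2 entry in the trace below) and once more, further down, with -x 3 (val=3), so both the 2-source and 3-source xor paths get exercised. The only difference between the two invocations recorded in the log:

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y        # 2 sources (default)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3   # 3 sources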
00:06:13.996 [2024-07-15 18:20:59.339345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750101 ] 00:06:13.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.996 [2024-07-15 18:20:59.389237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.996 [2024-07-15 18:20:59.462915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.996 18:20:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:15.372 18:21:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.372 00:06:15.372 real 0m1.334s 00:06:15.372 user 0m1.238s 00:06:15.372 sys 0m0.109s 00:06:15.372 18:21:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.372 18:21:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:15.372 ************************************ 00:06:15.372 END TEST accel_xor 00:06:15.372 ************************************ 00:06:15.372 18:21:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.372 18:21:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:15.372 18:21:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.372 18:21:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.372 18:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.373 ************************************ 00:06:15.373 START TEST accel_xor 00:06:15.373 ************************************ 00:06:15.373 18:21:00 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:15.373 18:21:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:15.373 [2024-07-15 18:21:00.739303] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
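Every accel_perf instance in this log starts with the same DPDK EAL parameter set; only --file-prefix changes, because it embeds the process PID (spdk_pid3750101, spdk_pid3750399, ...) so hugepage files from consecutive runs cannot collide. The recurring flags, annotated (the annotations are interpretations, not taken from the log):

    --no-shconf -c 0x1             # no shared EAL config; single core (mask 0x1)
    --huge-unlink --no-telemetry   # unlink hugepage files on exit; telemetry off
    --base-virtaddr=0x200000000000 --match-allocations
    --file-prefix=spdk_pid<PID>    # unique per accel_perf process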
00:06:15.373 [2024-07-15 18:21:00.739364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750399 ] 00:06:15.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.373 [2024-07-15 18:21:00.807984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.373 [2024-07-15 18:21:00.889378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.630 18:21:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:16.562 18:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.562 00:06:16.562 real 0m1.356s 00:06:16.562 user 0m1.254s 00:06:16.562 sys 0m0.115s 00:06:16.562 18:21:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.562 18:21:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:16.562 ************************************ 00:06:16.562 END TEST accel_xor 00:06:16.562 ************************************ 00:06:16.562 18:21:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.562 18:21:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:16.562 18:21:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:16.562 18:21:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.562 18:21:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.820 ************************************ 00:06:16.820 START TEST accel_dif_verify 00:06:16.820 ************************************ 00:06:16.820 18:21:02 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:16.820 [2024-07-15 18:21:02.165593] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
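The dif_verify and dif_generate traces that follow parse four sizes instead of one: 4096 bytes twice, then 512 bytes, then 8 bytes. The log does not label them, but the 512/8 split matches the usual T10 DIF layout (512-byte blocks, each carrying 8 bytes of protection information), which, if that reading is right, works out to:

    4096 B / 512 B = 8 blocks per buffer
    8 blocks x 8 B = 64 B of protection information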
00:06:16.820 [2024-07-15 18:21:02.165641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750730 ] 00:06:16.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.820 [2024-07-15 18:21:02.232620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.820 [2024-07-15 18:21:02.307056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.820 18:21:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:18.276 18:21:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.276 00:06:18.276 real 0m1.348s 00:06:18.276 user 0m1.244s 00:06:18.276 sys 0m0.117s 00:06:18.276 18:21:03 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.276 18:21:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:18.276 ************************************ 00:06:18.276 END TEST accel_dif_verify 00:06:18.276 ************************************ 00:06:18.276 18:21:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.276 18:21:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:18.276 18:21:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:18.276 18:21:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.276 18:21:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.276 ************************************ 00:06:18.276 START TEST accel_dif_generate 00:06:18.276 ************************************ 00:06:18.276 18:21:03 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 
18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:18.276 [2024-07-15 18:21:03.582408] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:18.276 [2024-07-15 18:21:03.582477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750985 ] 00:06:18.276 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.276 [2024-07-15 18:21:03.650146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.276 [2024-07-15 18:21:03.720137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:18.276 18:21:03 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:18.276 18:21:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.649 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.650 18:21:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:19.650 18:21:04 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.650 00:06:19.650 real 0m1.346s 00:06:19.650 user 0m1.231s 00:06:19.650 sys 0m0.127s 00:06:19.650 18:21:04 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.650 18:21:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:19.650 ************************************ 00:06:19.650 END TEST accel_dif_generate 00:06:19.650 ************************************ 00:06:19.650 18:21:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.650 18:21:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:19.650 18:21:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.650 18:21:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.650 18:21:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.650 ************************************ 00:06:19.650 START TEST accel_dif_generate_copy 00:06:19.650 ************************************ 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:19.650 18:21:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:19.650 [2024-07-15 18:21:04.989266] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
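[Annotation] The accel_dif_generate run that just completed exercises the software dif_generate path for one second. For reference, a minimal standalone sketch of the same invocation, assuming only that the SPDK tree is built at the workspace path shown in the log; the harness additionally pipes a JSON accel config over /dev/fd/62 with -c, which this sketch assumes an interactive run can omit:

  # sketch: repeat the 1-second software DIF-generate run from the trace above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate   # -t: duration in seconds, -w: workload (per the trace)

The 4096-, 512-, and 8-byte values read out of the config trace above are presumably the transfer size, DIF block size, and metadata size for the generated protection information; that mapping is an inference from the values, not documented output.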
00:06:19.650 [2024-07-15 18:21:04.989317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751232 ] 00:06:19.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.650 [2024-07-15 18:21:05.054454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.650 [2024-07-15 18:21:05.125048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.650 18:21:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.023 00:06:21.023 real 0m1.340s 00:06:21.023 user 0m1.226s 00:06:21.023 sys 0m0.126s 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.023 18:21:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:21.023 ************************************ 00:06:21.023 END TEST accel_dif_generate_copy 00:06:21.023 ************************************ 00:06:21.023 18:21:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.023 18:21:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:21.023 18:21:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.023 18:21:06 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:21.023 18:21:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.023 18:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.023 ************************************ 00:06:21.023 START TEST accel_comp 00:06:21.023 ************************************ 00:06:21.023 18:21:06 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.023 18:21:06 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:21.023 [2024-07-15 18:21:06.394531] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:21.023 [2024-07-15 18:21:06.394579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751479 ] 00:06:21.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.023 [2024-07-15 18:21:06.460783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.023 [2024-07-15 18:21:06.531758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.023 18:21:06 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.023 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.301 18:21:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:22.236 18:21:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.236 00:06:22.236 real 0m1.345s 00:06:22.236 user 0m1.230s 00:06:22.236 sys 0m0.127s 00:06:22.236 18:21:07 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.236 18:21:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:22.236 ************************************ 00:06:22.236 END TEST accel_comp 00:06:22.236 ************************************ 00:06:22.236 18:21:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.236 18:21:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.236 18:21:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.236 18:21:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.236 18:21:07 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:22.236 ************************************ 00:06:22.236 START TEST accel_decomp 00:06:22.236 ************************************ 00:06:22.236 18:21:07 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:22.236 18:21:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:22.494 [2024-07-15 18:21:07.806958] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
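[Annotation] The accel_comp test that just finished and the accel_decomp test starting here differ only in the workload name and the extra -y flag on the decompress side. A minimal sketch of both invocations exactly as they appear in the trace; the -l and -y semantics noted in the comments are assumptions:

  # sketch: the compress run from the log, and the matching decompress run
  # -l names the input payload file (spdk/test/accel/bib); -y appears only on
  # the decompress invocations and is assumed here to enable output verification
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y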
00:06:22.494 [2024-07-15 18:21:07.807026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751733 ] 00:06:22.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.494 [2024-07-15 18:21:07.875702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.494 [2024-07-15 18:21:07.946926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.494 18:21:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.494 18:21:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.868 18:21:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.868 00:06:23.868 real 0m1.352s 00:06:23.868 user 0m1.246s 00:06:23.868 sys 0m0.117s 00:06:23.868 18:21:09 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.868 18:21:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:23.868 ************************************ 00:06:23.868 END TEST accel_decomp 00:06:23.868 ************************************ 00:06:23.868 18:21:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.869 18:21:09 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.869 18:21:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:23.869 18:21:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.869 18:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.869 ************************************ 00:06:23.869 START TEST accel_decomp_full 00:06:23.869 ************************************ 00:06:23.869 18:21:09 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.869 18:21:09 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:23.869 [2024-07-15 18:21:09.225069] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:23.869 [2024-07-15 18:21:09.225117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752248 ] 00:06:23.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.869 [2024-07-15 18:21:09.291574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.869 [2024-07-15 18:21:09.363334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.869 18:21:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.249 18:21:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.249 00:06:25.249 real 0m1.357s 00:06:25.249 user 0m1.240s 00:06:25.249 sys 0m0.129s 00:06:25.249 18:21:10 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.249 18:21:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:25.249 ************************************ 00:06:25.249 END TEST accel_decomp_full 00:06:25.249 ************************************ 00:06:25.249 18:21:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.249 18:21:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.249 18:21:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
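[Annotation] accel_decomp_full repeats the decompress run with -o 0 added, and the config trace correspondingly reads '111250 bytes' where the earlier runs read '4096 bytes', i.e. the whole bib payload per operation. Treating -o as the per-operation size (with 0 meaning full file size) is an inference from those two values, not documented behavior. A sketch under that assumption:

  # sketch: full-payload decompress, per the accel_decomp_full trace above
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0   # -o 0: full-size ops (inferred)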
00:06:25.249 18:21:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.249 18:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.249 ************************************ 00:06:25.249 START TEST accel_decomp_mcore 00:06:25.249 ************************************ 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:25.249 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:25.249 [2024-07-15 18:21:10.653335] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
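[Annotation] accel_decomp_mcore, starting here, adds -m 0xf to the same decompress invocation; the EAL line above shows the mask arriving as -c 0xf, with four reactors coming up on cores 0 through 3. A sketch, assuming -m is the application core mask passed through to DPDK:

  # sketch: the same decompress run spread across four cores
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf   # -m: core mask (assumed)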
00:06:25.249 [2024-07-15 18:21:10.653413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752617 ] 00:06:25.249 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.249 [2024-07-15 18:21:10.716137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.249 [2024-07-15 18:21:10.794726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.249 [2024-07-15 18:21:10.794857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.249 [2024-07-15 18:21:10.794963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.249 [2024-07-15 18:21:10.794964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 18:21:10 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 18:21:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
[accel.sh option trace, condensed: val=software (accel_module=software at accel/accel.sh@22), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes, plus empty val= terminators; the repeated 'case "$var" in / IFS=: / read -r var val' records are omitted]
00:06:26.444 18:21:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.444 18:21:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:26.444 18:21:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.444 real 0m1.360s
00:06:26.444 user 0m4.563s
00:06:26.444 sys 0m0.131s
00:06:26.444 18:21:11
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.444 18:21:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:26.445 ************************************ 00:06:26.445 END TEST accel_decomp_mcore 00:06:26.445 ************************************ 00:06:26.704 18:21:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.704 18:21:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.704 18:21:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.704 18:21:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.704 18:21:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.704 ************************************ 00:06:26.704 START TEST accel_decomp_full_mcore 00:06:26.704 ************************************ 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:26.704 18:21:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:26.704 [2024-07-15 18:21:12.079494] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
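The accel_perf command line captured above can be replayed by hand. A minimal sketch, with flag readings inferred from this trace rather than from the tool's help text (-t run time in seconds, matching val='1 seconds'; -w workload; -l the compressed input file; -m the core mask; -y and -o 0 are copied verbatim and their meanings are assumptions here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # accel.sh feeds a JSON accel config to the tool on fd 62, hence -c /dev/fd/62
  $SPDK/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l $SPDK/test/accel/bib \
      -y -o 0 -m 0xf   # 0xf: four reactors, matching '-c 0xf' in the EAL parameters record below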
00:06:26.704 [2024-07-15 18:21:12.079556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752868 ]
00:06:26.704 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.704 [2024-07-15 18:21:12.146973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:26.704 [2024-07-15 18:21:12.222645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:26.704 [2024-07-15 18:21:12.222754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:26.704 [2024-07-15 18:21:12.222837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.704 [2024-07-15 18:21:12.222837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[accel.sh option trace, condensed: val=0xf, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=Yes, plus empty val= terminators; the repeated 'case "$var" in / IFS=: / read -r var val' records are omitted]
00:06:27.913 18:21:13 accel.accel_decomp_full_mcore --
accel/accel.sh@19 -- # IFS=: 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.913 00:06:27.913 real 0m1.370s 00:06:27.913 user 0m4.607s 00:06:27.913 sys 0m0.134s 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.913 18:21:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:27.913 ************************************ 00:06:27.913 END TEST accel_decomp_full_mcore 00:06:27.913 ************************************ 00:06:27.913 18:21:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.913 18:21:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.913 18:21:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:27.913 18:21:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.913 18:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.175 ************************************ 00:06:28.175 START TEST accel_decomp_mthread 00:06:28.175 ************************************ 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:28.175 18:21:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:28.175 [2024-07-15 18:21:13.514256] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
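The multi-threaded variant that begins here trades the core mask for a thread count: the EAL line below shows -c 0x1 (one reactor), and the -T 2 on the command line above surfaces as val=2 in the option trace. A sketch under the same assumptions as before (-T is read as worker threads, which is an inference from the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l $SPDK/test/accel/bib \
      -y -T 2   # one core, two threads; without -o 0 the trace shows the default val='4096 bytes'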
00:06:28.175 [2024-07-15 18:21:13.514303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753122 ]
00:06:28.175 EAL: No free 2048 kB hugepages reported on node 1
00:06:28.175 [2024-07-15 18:21:13.579972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.175 [2024-07-15 18:21:13.650878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[accel.sh option trace, condensed: val=0x1, val=decompress (accel_opc=decompress), val='4096 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes, plus empty val= terminators; the repeated 'case "$var" in / IFS=: / read -r var val' records are omitted]
00:06:29.551 18:21:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:29.551 18:21:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:29.551 18:21:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:29.551 real 0m1.345s
00:06:29.551 user 0m1.231s
00:06:29.551 sys 0m0.128s
00:06:29.551 18:21:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:29.551 18:21:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:29.551 ************************************
00:06:29.551 END TEST accel_decomp_mthread
00:06:29.551 ************************************
00:06:29.551 18:21:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:29.551 18:21:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:29.551 18:21:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:29.551 18:21:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:29.551 18:21:14 accel --
common/autotest_common.sh@10 -- # set +x 00:06:29.551 ************************************ 00:06:29.551 START TEST accel_decomp_full_mthread 00:06:29.551 ************************************ 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:29.551 18:21:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:29.551 [2024-07-15 18:21:14.926795] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
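This last decompress permutation combines the previous two: full-size buffers (the -o 0 run shows up below as val='111250 bytes') on one core with two threads. The real/user/sys triplets printed after each run double as a parallelism sanity check: the 0xf-mask runs above accumulate roughly 4.6s of user time across four reactors inside about 1.37s of wall time, while the single-core runs stay near 1.2s of user time. A throwaway one-liner for collecting those triplets from a saved console log (the log file name here is hypothetical):

  grep -E '(real|user|sys) 0m' nvmf-tcp-phy-autotest-console.log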
00:06:29.551 [2024-07-15 18:21:14.926860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753373 ]
00:06:29.551 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.551 [2024-07-15 18:21:14.994173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.551 [2024-07-15 18:21:15.065316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[accel.sh option trace, condensed: val=0x1, val=decompress (accel_opc=decompress), val='111250 bytes', val=software (accel_module=software), val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, val=32, val=32, val=2, val='1 seconds', val=Yes, plus empty val= terminators; the repeated 'case "$var" in / IFS=: / read -r var val' records are omitted]
00:06:30.747 18:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:30.747 18:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:30.747 18:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:30.747 real 0m1.372s
00:06:30.747 user 0m1.259s
00:06:30.747 sys 0m0.125s
00:06:30.747 18:21:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:30.747 18:21:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:30.747 ************************************
00:06:30.747 END
TEST accel_decomp_full_mthread 00:06:30.747 ************************************ 00:06:30.747 18:21:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.006 18:21:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:31.006 18:21:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.006 18:21:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:31.006 18:21:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.006 18:21:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.006 18:21:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.006 18:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.006 18:21:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.006 18:21:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.006 18:21:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.006 18:21:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.006 18:21:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:31.006 18:21:16 accel -- accel/accel.sh@41 -- # jq -r . 00:06:31.006 ************************************ 00:06:31.006 START TEST accel_dif_functional_tests 00:06:31.006 ************************************ 00:06:31.006 18:21:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.006 [2024-07-15 18:21:16.381581] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:31.006 [2024-07-15 18:21:16.381614] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753625 ] 00:06:31.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.006 [2024-07-15 18:21:16.446252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.007 [2024-07-15 18:21:16.518463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.007 [2024-07-15 18:21:16.518569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.007 [2024-07-15 18:21:16.518570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.266 00:06:31.266 00:06:31.266 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.266 http://cunit.sourceforge.net/ 00:06:31.266 00:06:31.266 00:06:31.266 Suite: accel_dif 00:06:31.266 Test: verify: DIF generated, GUARD check ...passed 00:06:31.266 Test: verify: DIF generated, APPTAG check ...passed 00:06:31.266 Test: verify: DIF generated, REFTAG check ...passed 00:06:31.266 Test: verify: DIF not generated, GUARD check ...[2024-07-15 18:21:16.586474] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:31.266 passed 00:06:31.266 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 18:21:16.586519] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:31.266 passed 00:06:31.266 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 18:21:16.586538] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:31.266 passed 00:06:31.266 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:31.266 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
18:21:16.586578] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:31.266 passed 00:06:31.266 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:31.266 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:31.266 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:31.266 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 18:21:16.586670] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:31.266 passed 00:06:31.266 Test: verify copy: DIF generated, GUARD check ...passed 00:06:31.266 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:31.266 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:31.266 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 18:21:16.586776] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:31.266 passed 00:06:31.266 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 18:21:16.586798] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:31.266 passed 00:06:31.266 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 18:21:16.586816] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:31.266 passed 00:06:31.266 Test: generate copy: DIF generated, GUARD check ...passed 00:06:31.266 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:31.266 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:31.266 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:31.266 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:31.266 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:31.266 Test: generate copy: iovecs-len validate ...[2024-07-15 18:21:16.586972] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
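The *ERROR* records inside this CUnit suite are expected output: each 'not generated' or 'incorrect' case feeds a block whose Data Integrity Field deliberately mismatches, then asserts that the verify path flags the right field. Reading the three fields as Guard = CRC over the block data, App Tag = 16-bit application tag, Ref Tag = 32-bit reference tag is standard T10 DIF background, not something this log states. The suite is a standalone binary, so it can be rerun in isolation the same way the harness invokes it:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/test/accel/dif/dif -c /dev/fd/62   # JSON config on fd 62, as wired up by the harness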
00:06:31.266 passed 00:06:31.266 Test: generate copy: buffer alignment validate ...passed 00:06:31.266 00:06:31.266 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.266 suites 1 1 n/a 0 0 00:06:31.266 tests 26 26 26 0 0 00:06:31.266 asserts 115 115 115 0 n/a 00:06:31.266 00:06:31.266 Elapsed time = 0.000 seconds 00:06:31.266 00:06:31.266 real 0m0.416s 00:06:31.266 user 0m0.624s 00:06:31.266 sys 0m0.145s 00:06:31.266 18:21:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.266 18:21:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:31.266 ************************************ 00:06:31.266 END TEST accel_dif_functional_tests 00:06:31.266 ************************************ 00:06:31.266 18:21:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.266 00:06:31.266 real 0m31.273s 00:06:31.266 user 0m34.821s 00:06:31.266 sys 0m4.489s 00:06:31.266 18:21:16 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.266 18:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.266 ************************************ 00:06:31.266 END TEST accel 00:06:31.266 ************************************ 00:06:31.266 18:21:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.266 18:21:16 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:31.266 18:21:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.266 18:21:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.266 18:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:31.525 ************************************ 00:06:31.525 START TEST accel_rpc 00:06:31.525 ************************************ 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:31.525 * Looking for test storage... 00:06:31.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:31.525 18:21:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.525 18:21:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3753709 00:06:31.525 18:21:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3753709 00:06:31.525 18:21:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3753709 ']' 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.525 18:21:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.525 [2024-07-15 18:21:16.997297] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
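The accel_rpc suite drives a full spdk_tgt instead of accel_perf. The --wait-for-rpc flag on the command line above brings the target up with only the JSON-RPC server running and parks subsystem initialization, which is what lets the test reassign opcodes before the accel framework is live; startup is then completed explicitly over RPC. A minimal sketch of that bring-up (rpc.py talks to the /var/tmp/spdk.sock socket named in the wait message):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &   # RPC server only, no subsystem init yet
  # ... pre-init RPCs such as accel_assign_opc go here ...
  $SPDK/scripts/rpc.py framework_start_init   # finish startup, as the trace below does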
00:06:31.525 [2024-07-15 18:21:16.997360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753709 ] 00:06:31.525 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.525 [2024-07-15 18:21:17.063705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.784 [2024-07-15 18:21:17.143128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.353 18:21:17 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.353 18:21:17 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.353 18:21:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:32.353 18:21:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:32.353 18:21:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:32.353 18:21:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:32.353 18:21:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:32.353 18:21:17 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.353 18:21:17 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.353 18:21:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 START TEST accel_assign_opcode 00:06:32.353 ************************************ 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 [2024-07-15 18:21:17.821137] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 [2024-07-15 18:21:17.829151] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.353 18:21:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:32.612 software
00:06:32.612 real 0m0.232s
00:06:32.612 user 0m0.045s
00:06:32.612 sys 0m0.009s
00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:32.612 18:21:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:32.612 ************************************
00:06:32.612 END TEST accel_assign_opcode
00:06:32.612 ************************************
00:06:32.612 18:21:18 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:32.612 18:21:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3753709
[killprocess trace condensed: '[' -z 3753709 ']', kill -0 3753709, uname / '[' Linux = Linux ']', ps --no-headers -o comm= 3753709, process_name=reactor_0, '[' reactor_0 = sudo ']', echo 'killing process with pid 3753709', kill 3753709, wait 3753709]
00:06:32.612 killing process with pid 3753709
00:06:32.871 real 0m1.572s
00:06:32.871 user 0m1.642s
00:06:32.871 sys 0m0.415s
00:06:32.871 18:21:18 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:33.130 18:21:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.130 ************************************
00:06:33.130 END TEST accel_rpc
00:06:33.130 ************************************
00:06:33.130 18:21:18 -- common/autotest_common.sh@1142 -- # return 0
00:06:33.130 18:21:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:33.130 18:21:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:33.130 18:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:33.130 18:21:18 -- common/autotest_common.sh@10 -- # set +x
00:06:33.130 ************************************
00:06:33.130 START TEST app_cmdline
00:06:33.130 ************************************
00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:33.130 * Looking for test storage...
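Reduced to its RPCs, the accel_assign_opcode flow that just passed looks like the following (method names and arguments are taken from the trace; this assumes a target started with --wait-for-rpc as sketched earlier):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init even for a bogus module name
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software    # reassignment: the last module wins
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints: software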
00:06:33.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:33.130 18:21:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.130 18:21:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3754162 00:06:33.130 18:21:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3754162 00:06:33.130 18:21:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3754162 ']' 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.130 18:21:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.130 [2024-07-15 18:21:18.633531] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:33.130 [2024-07-15 18:21:18.633577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754162 ] 00:06:33.130 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.389 [2024-07-15 18:21:18.696875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.389 [2024-07-15 18:21:18.778364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.955 18:21:19 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.955 18:21:19 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:33.955 18:21:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:34.214 { 00:06:34.214 "version": "SPDK v24.09-pre git sha1 bdeef1ed3", 00:06:34.214 "fields": { 00:06:34.214 "major": 24, 00:06:34.214 "minor": 9, 00:06:34.214 "patch": 0, 00:06:34.214 "suffix": "-pre", 00:06:34.214 "commit": "bdeef1ed3" 00:06:34.214 } 00:06:34.214 } 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:34.214 18:21:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:34.214 18:21:19 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:34.473 request:
00:06:34.473 {
00:06:34.473 "method": "env_dpdk_get_mem_stats",
00:06:34.473 "req_id": 1
00:06:34.473 }
00:06:34.473 Got JSON-RPC error response
00:06:34.473 response:
00:06:34.473 {
00:06:34.473 "code": -32601,
00:06:34.473 "message": "Method not found"
00:06:34.473 }
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:34.473 18:21:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3754162
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3754162 ']'
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3754162
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3754162
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3754162'
00:06:34.473 killing process with pid 3754162
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@967 -- # kill 3754162
00:06:34.473 18:21:19 app_cmdline -- common/autotest_common.sh@972 -- # wait 3754162
00:06:34.732
00:06:34.732 real 0m1.694s
00:06:34.732 user 0m2.035s
00:06:34.732 sys 0m0.432s
00:06:34.732 18:21:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
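The app_cmdline suite above demonstrates SPDK's RPC allowlisting: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods succeed while anything else is rejected with JSON-RPC error -32601 (Method not found), which is exactly what the NOT wrapper asserts. A minimal sketch of the same behavior, assuming the repository root as working directory and the default RPC socket:

  # start the target with only two methods permitted
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # permitted method: prints the version object seen earlier in the trace
  ./scripts/rpc.py spdk_get_version
  # any method off the allowlist fails with code -32601
  ./scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'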
00:06:34.732 18:21:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.732 ************************************ 00:06:34.732 END TEST app_cmdline 00:06:34.732 ************************************ 00:06:34.732 18:21:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.732 18:21:20 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:34.732 18:21:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.732 18:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.732 18:21:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.732 ************************************ 00:06:34.732 START TEST version 00:06:34.732 ************************************ 00:06:34.732 18:21:20 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:34.991 * Looking for test storage... 00:06:34.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.991 18:21:20 version -- app/version.sh@17 -- # get_header_version major 00:06:34.991 18:21:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # cut -f2 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.991 18:21:20 version -- app/version.sh@17 -- # major=24 00:06:34.991 18:21:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:34.991 18:21:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # cut -f2 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.991 18:21:20 version -- app/version.sh@18 -- # minor=9 00:06:34.991 18:21:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:34.991 18:21:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # cut -f2 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.991 18:21:20 version -- app/version.sh@19 -- # patch=0 00:06:34.991 18:21:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:34.991 18:21:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # cut -f2 00:06:34.991 18:21:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.991 18:21:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:34.991 18:21:20 version -- app/version.sh@22 -- # version=24.9 00:06:34.991 18:21:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:34.991 18:21:20 version -- app/version.sh@28 -- # version=24.9rc0 00:06:34.991 18:21:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:34.991 18:21:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:34.991 18:21:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:34.991 18:21:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:34.991 00:06:34.991 real 0m0.158s 00:06:34.991 user 0m0.081s 00:06:34.991 sys 0m0.115s 00:06:34.991 18:21:20 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.991 18:21:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:34.991 ************************************ 00:06:34.991 END TEST version 00:06:34.991 ************************************ 00:06:34.991 18:21:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.991 18:21:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@198 -- # uname -s 00:06:34.991 18:21:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:34.991 18:21:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:34.991 18:21:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:34.991 18:21:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:34.991 18:21:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.991 18:21:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.991 18:21:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:34.991 18:21:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:34.991 18:21:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:34.991 18:21:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:34.991 18:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.991 18:21:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.991 ************************************ 00:06:34.991 START TEST nvmf_tcp 00:06:34.991 ************************************ 00:06:34.991 18:21:20 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:35.251 * Looking for test storage... 00:06:35.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.251 18:21:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.251 18:21:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.251 18:21:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.251 18:21:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.251 18:21:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.251 18:21:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.251 18:21:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:35.251 18:21:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:35.251 18:21:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.251 18:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:35.251 18:21:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:35.251 18:21:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.251 18:21:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.251 18:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.251 ************************************ 00:06:35.251 START TEST nvmf_example 00:06:35.251 ************************************ 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:35.251 * Looking for test storage... 
00:06:35.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.251 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.252 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.512 18:21:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:40.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:40.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.783 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.784 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.784 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:40.784 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:40.784 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.784 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:41.042 Found net devices under 0000:86:00.1: cvl_0_1 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.042 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.043 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:41.043 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.043 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.043 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:41.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:41.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:06:41.301
00:06:41.301 --- 10.0.0.2 ping statistics ---
00:06:41.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:41.301 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:41.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:41.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:06:41.301
00:06:41.301 --- 10.0.0.1 ping statistics ---
00:06:41.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:41.301 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3757616
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3757616
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3757616 ']'
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
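The nvmf_tcp_init trace above moves one port of the detected e810 pair into a private network namespace so a single host can act as both initiator and target over real NICs. Condensed into a sketch of just the namespace plumbing, with the interface names as detected on this machine:

  ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                  # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host reachability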
00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.301 18:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.234 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.234 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:42.235 18:21:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:42.235 EAL: No free 2048 kB hugepages reported on node 1 
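Condensed from the rpc_cmd trace above: the nvmf example app in the namespace is configured over JSON-RPC and then driven by spdk_nvme_perf, whose results follow below. A sketch using the rpc.py equivalents, assuming the default RPC socket:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets the in-capsule data size
  ./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB bdev, 512 B blocks; returns Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # queue depth 64, 4 KiB I/O, random mix with 30% reads, 10 s run against the new listener
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'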
00:06:54.438 Initializing NVMe Controllers
00:06:54.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:54.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:54.438 Initialization complete. Launching workers.
00:06:54.438 ========================================================
00:06:54.438 Latency(us)
00:06:54.438 Device Information : IOPS MiB/s Average min max
00:06:54.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18395.66 71.86 3478.90 521.47 16273.39
00:06:54.438 ========================================================
00:06:54.438 Total : 18395.66 71.86 3478.90 521.47 16273.39
00:06:54.438
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:54.438 rmmod nvme_tcp
00:06:54.438 rmmod nvme_fabrics
00:06:54.438 rmmod nvme_keyring
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3757616 ']'
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3757616
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3757616 ']'
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3757616
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3757616
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3757616'
00:06:54.438 killing process with pid 3757616
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3757616
00:06:54.438 18:21:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3757616
00:06:54.438 nvmf threads initialize successfully
00:06:54.438 bdev subsystem init successfully
00:06:54.438 created a nvmf target service
00:06:54.438 create targets's poll groups done
00:06:54.438 all subsystems of target started
00:06:54.438 nvmf target is running
00:06:54.438 all subsystems of target stopped
00:06:54.438 destroy targets's poll groups done
00:06:54.438 destroyed the nvmf target service
00:06:54.438 bdev subsystem finish successfully
00:06:54.438 nvmf threads destroy successfully
00:06:54.438 18:21:38
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.438 18:21:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.696 00:06:54.696 real 0m19.496s 00:06:54.696 user 0m45.728s 00:06:54.696 sys 0m5.776s 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.696 18:21:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.696 ************************************ 00:06:54.696 END TEST nvmf_example 00:06:54.696 ************************************ 00:06:54.696 18:21:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:54.696 18:21:40 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:54.696 18:21:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.696 18:21:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.696 18:21:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.963 ************************************ 00:06:54.963 START TEST nvmf_filesystem 00:06:54.963 ************************************ 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:54.963 * Looking for test storage... 
00:06:54.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:54.963 18:21:40 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:54.963 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:54.964 #define SPDK_CONFIG_H 00:06:54.964 #define SPDK_CONFIG_APPS 1 00:06:54.964 #define SPDK_CONFIG_ARCH native 00:06:54.964 #undef SPDK_CONFIG_ASAN 00:06:54.964 #undef SPDK_CONFIG_AVAHI 00:06:54.964 #undef SPDK_CONFIG_CET 00:06:54.964 #define SPDK_CONFIG_COVERAGE 1 00:06:54.964 #define SPDK_CONFIG_CROSS_PREFIX 00:06:54.964 #undef SPDK_CONFIG_CRYPTO 00:06:54.964 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:54.964 #undef SPDK_CONFIG_CUSTOMOCF 00:06:54.964 #undef SPDK_CONFIG_DAOS 00:06:54.964 #define SPDK_CONFIG_DAOS_DIR 00:06:54.964 #define SPDK_CONFIG_DEBUG 1 00:06:54.964 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:54.964 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:54.964 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:54.964 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:54.964 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:54.964 #undef SPDK_CONFIG_DPDK_UADK 00:06:54.964 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:54.964 #define SPDK_CONFIG_EXAMPLES 1 00:06:54.964 #undef SPDK_CONFIG_FC 00:06:54.964 #define SPDK_CONFIG_FC_PATH 00:06:54.964 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:54.964 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:54.964 #undef SPDK_CONFIG_FUSE 00:06:54.964 #undef SPDK_CONFIG_FUZZER 00:06:54.964 #define SPDK_CONFIG_FUZZER_LIB 00:06:54.964 #undef SPDK_CONFIG_GOLANG 00:06:54.964 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:54.964 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:54.964 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:54.964 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:54.964 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:54.964 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:54.964 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:54.964 #define SPDK_CONFIG_IDXD 1 00:06:54.964 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:54.964 #undef SPDK_CONFIG_IPSEC_MB 00:06:54.964 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:54.964 #define SPDK_CONFIG_ISAL 1 00:06:54.964 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:54.964 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:54.964 #define SPDK_CONFIG_LIBDIR 00:06:54.964 #undef SPDK_CONFIG_LTO 00:06:54.964 #define SPDK_CONFIG_MAX_LCORES 128 00:06:54.964 #define SPDK_CONFIG_NVME_CUSE 1 00:06:54.964 #undef SPDK_CONFIG_OCF 00:06:54.964 #define SPDK_CONFIG_OCF_PATH 00:06:54.964 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:54.964 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:54.964 #define SPDK_CONFIG_PGO_DIR 00:06:54.964 #undef SPDK_CONFIG_PGO_USE 00:06:54.964 #define SPDK_CONFIG_PREFIX /usr/local 00:06:54.964 #undef SPDK_CONFIG_RAID5F 00:06:54.964 #undef SPDK_CONFIG_RBD 00:06:54.964 #define SPDK_CONFIG_RDMA 1 00:06:54.964 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:54.964 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:54.964 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:54.964 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:54.964 #define SPDK_CONFIG_SHARED 1 00:06:54.964 #undef SPDK_CONFIG_SMA 00:06:54.964 #define SPDK_CONFIG_TESTS 1 00:06:54.964 #undef SPDK_CONFIG_TSAN 00:06:54.964 #define SPDK_CONFIG_UBLK 1 00:06:54.964 #define SPDK_CONFIG_UBSAN 1 00:06:54.964 #undef SPDK_CONFIG_UNIT_TESTS 00:06:54.964 #undef SPDK_CONFIG_URING 00:06:54.964 #define SPDK_CONFIG_URING_PATH 00:06:54.964 #undef SPDK_CONFIG_URING_ZNS 00:06:54.964 #undef SPDK_CONFIG_USDT 00:06:54.964 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:54.964 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:54.964 #define SPDK_CONFIG_VFIO_USER 1 00:06:54.964 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:54.964 #define SPDK_CONFIG_VHOST 1 00:06:54.964 #define SPDK_CONFIG_VIRTIO 1 00:06:54.964 #undef SPDK_CONFIG_VTUNE 00:06:54.964 #define SPDK_CONFIG_VTUNE_DIR 00:06:54.964 #define SPDK_CONFIG_WERROR 1 00:06:54.964 #define SPDK_CONFIG_WPDK_DIR 00:06:54.964 #undef SPDK_CONFIG_XNVME 00:06:54.964 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.964 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:54.965 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:54.966 18:21:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
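The sanitizer plumbing traced above reduces to a handful of exports. A minimal sketch, using only values visible in the trace (nothing else is assumed): the leak-suppression file is rebuilt on every run, and the ASan/UBSan option strings turn any report into a hard failure with exit code 134 rather than an advisory message.

# Sanitizer environment as assembled by autotest_common.sh above.
suppfile=/var/tmp/asan_suppression_file
rm -rf "$suppfile"
echo 'leak:libfuse3.so' > "$suppfile"   # known libfuse3 leak, deliberately suppressed
export LSAN_OPTIONS="suppressions=$suppfile"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'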
00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3760029 ]] 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3760029 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.YLpfxA 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.YLpfxA/tests/target /tmp/spdk.YLpfxA 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:54.966 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953421824 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4331008000 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190571900928 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974328320 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5402427392 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983787008 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987162112 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185489920 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986502656 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987166208 00:06:54.967 18:21:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=663552 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:54.967 * Looking for test storage... 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190571900928 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7617019904 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:54.967 18:21:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.967 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.307 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
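The set_test_storage probe a few entries back is plain integer arithmetic over the df output. A worked example with this run's figures, every number copied from the trace: the harness asks for 2 GiB of scratch space plus a 64 MiB margin and only falls back to another mount if projected usage would exceed 95% of the filesystem.

# Storage check for / (the spdk_root overlay), as computed above.
requested_size=2214592512               # 2147483648 requested + 67108864 margin
size=195974328320                       # total size of the mount
target_space=190571900928               # bytes still available
used=$(( size - target_space ))         # 5402427392 bytes in use
new_size=$(( used + requested_size ))   # 7617019904, matching the trace
if (( new_size * 100 / size > 95 )); then
    echo 'mount too full, try next candidate'   # not taken on this run
else
    echo 'fits'                         # projected usage here is about 3%
fi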
00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.308 18:21:40 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.308 18:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:00.573 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:00.573 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.573 18:21:46 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:00.573 Found net devices under 0000:86:00.0: cvl_0_0 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:00.573 Found net devices under 0000:86:00.1: cvl_0_1 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.573 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.574 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:07:00.832 00:07:00.832 --- 10.0.0.2 ping statistics --- 00:07:00.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.832 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:00.832 00:07:00.832 --- 10.0.0.1 ping statistics --- 00:07:00.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.832 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.832 18:21:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.091 ************************************ 00:07:01.091 START TEST nvmf_filesystem_no_in_capsule 00:07:01.091 ************************************ 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:01.091 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3763127 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3763127 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3763127 ']' 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.092 18:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.092 [2024-07-15 18:21:46.460102] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:01.092 [2024-07-15 18:21:46.460144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.092 [2024-07-15 18:21:46.528067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.092 [2024-07-15 18:21:46.603239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.092 [2024-07-15 18:21:46.603282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.092 [2024-07-15 18:21:46.603288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.092 [2024-07-15 18:21:46.603294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.092 [2024-07-15 18:21:46.603298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
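
The nvmf_tcp_init block traced above is self-contained enough to reproduce by hand. A minimal sketch of the topology it builds, using the interface names this run reports (cvl_0_0/cvl_0_1; substitute your own port names):

# One host plays both roles: the target port moves into a network
# namespace (10.0.0.2) and the initiator port stays in the root
# namespace (10.0.0.1), so NVMe/TCP traffic crosses the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP default port
ping -c 1 10.0.0.2    # verify the path in both directions before testing
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
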
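nvmfappstart then launches the target inside that namespace and blocks until the RPC socket answers. The polling loop below is a paraphrase of what waitforlisten does, not its exact body; rpc_get_methods is simply a cheap RPC to probe with:

# -i 0 selects shm instance 0 (hence --file-prefix=spdk0 above),
# -e 0xFFFF enables all tracepoint groups, -m 0xF runs reactors on
# cores 0-3 (hence the four reactor_run notices that follow).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the app listens on the default UNIX socket /var/tmp/spdk.sock.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
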
00:07:01.092 [2024-07-15 18:21:46.603393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.092 [2024-07-15 18:21:46.603432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.092 [2024-07-15 18:21:46.603536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.092 [2024-07-15 18:21:46.603538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.028 [2024-07-15 18:21:47.308179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.028 Malloc1 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:02.028 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.029 [2024-07-15 18:21:47.451448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:02.029 { 00:07:02.029 "name": "Malloc1", 00:07:02.029 "aliases": [ 00:07:02.029 "1c9dc80a-5735-4621-a0af-94ec303899cb" 00:07:02.029 ], 00:07:02.029 "product_name": "Malloc disk", 00:07:02.029 "block_size": 512, 00:07:02.029 "num_blocks": 1048576, 00:07:02.029 "uuid": "1c9dc80a-5735-4621-a0af-94ec303899cb", 00:07:02.029 "assigned_rate_limits": { 00:07:02.029 "rw_ios_per_sec": 0, 00:07:02.029 "rw_mbytes_per_sec": 0, 00:07:02.029 "r_mbytes_per_sec": 0, 00:07:02.029 "w_mbytes_per_sec": 0 00:07:02.029 }, 00:07:02.029 "claimed": true, 00:07:02.029 "claim_type": "exclusive_write", 00:07:02.029 "zoned": false, 00:07:02.029 "supported_io_types": { 00:07:02.029 "read": true, 00:07:02.029 "write": true, 00:07:02.029 "unmap": true, 00:07:02.029 "flush": true, 00:07:02.029 "reset": true, 00:07:02.029 "nvme_admin": false, 00:07:02.029 "nvme_io": false, 00:07:02.029 "nvme_io_md": false, 00:07:02.029 "write_zeroes": true, 00:07:02.029 "zcopy": true, 00:07:02.029 "get_zone_info": false, 00:07:02.029 "zone_management": false, 00:07:02.029 "zone_append": false, 00:07:02.029 "compare": false, 00:07:02.029 "compare_and_write": false, 00:07:02.029 "abort": true, 00:07:02.029 "seek_hole": false, 00:07:02.029 "seek_data": false, 00:07:02.029 "copy": true, 00:07:02.029 "nvme_iov_md": false 00:07:02.029 }, 00:07:02.029 "memory_domains": [ 00:07:02.029 { 
00:07:02.029 "dma_device_id": "system", 00:07:02.029 "dma_device_type": 1 00:07:02.029 }, 00:07:02.029 { 00:07:02.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.029 "dma_device_type": 2 00:07:02.029 } 00:07:02.029 ], 00:07:02.029 "driver_specific": {} 00:07:02.029 } 00:07:02.029 ]' 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:02.029 18:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.404 18:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.404 18:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:03.404 18:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.404 18:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:03.404 18:21:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:05.304 18:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:05.871 18:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:06.130 18:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 ************************************ 00:07:07.090 START TEST filesystem_ext4 00:07:07.090 ************************************ 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:07.090 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:07.090 18:21:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:07.090 mke2fs 1.46.5 (30-Dec-2021) 00:07:07.090 Discarding device blocks: 0/522240 done 00:07:07.090 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:07.090 Filesystem UUID: bb568204-b139-4377-83a9-270b5210fa8a 00:07:07.090 Superblock backups stored on blocks: 00:07:07.090 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:07.090 00:07:07.090 Allocating group tables: 0/64 done 00:07:07.090 Writing inode tables: 0/64 done 00:07:07.348 Creating journal (8192 blocks): done 00:07:07.348 Writing superblocks and filesystem accounting information: 0/64 done 00:07:07.348 00:07:07.348 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:07.348 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.348 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3763127 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.608 18:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.608 00:07:07.608 real 0m0.443s 00:07:07.608 user 0m0.025s 00:07:07.608 sys 0m0.063s 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:07.608 ************************************ 00:07:07.608 END TEST filesystem_ext4 00:07:07.608 ************************************ 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.608 18:21:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.608 ************************************ 00:07:07.608 START TEST filesystem_btrfs 00:07:07.608 ************************************ 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.608 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:08.182 btrfs-progs v6.6.2 00:07:08.182 See https://btrfs.readthedocs.io for more information. 00:07:08.182 00:07:08.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:08.182 NOTE: several default settings have changed in version 5.15, please make sure 00:07:08.182 this does not affect your deployments: 00:07:08.182 - DUP for metadata (-m dup) 00:07:08.182 - enabled no-holes (-O no-holes) 00:07:08.182 - enabled free-space-tree (-R free-space-tree) 00:07:08.182 00:07:08.182 Label: (null) 00:07:08.182 UUID: 13bd1d2b-dea7-42d5-aac3-2bd3ecaebdc2 00:07:08.182 Node size: 16384 00:07:08.182 Sector size: 4096 00:07:08.182 Filesystem size: 510.00MiB 00:07:08.182 Block group profiles: 00:07:08.182 Data: single 8.00MiB 00:07:08.182 Metadata: DUP 32.00MiB 00:07:08.182 System: DUP 8.00MiB 00:07:08.182 SSD detected: yes 00:07:08.182 Zoned device: no 00:07:08.182 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:08.182 Runtime features: free-space-tree 00:07:08.182 Checksum: crc32c 00:07:08.182 Number of devices: 1 00:07:08.182 Devices: 00:07:08.182 ID SIZE PATH 00:07:08.182 1 510.00MiB /dev/nvme0n1p1 00:07:08.182 00:07:08.182 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:08.182 18:21:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3763127 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.749 00:07:08.749 real 0m1.125s 00:07:08.749 user 0m0.030s 00:07:08.749 sys 0m0.119s 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:08.749 ************************************ 00:07:08.749 END TEST filesystem_btrfs 00:07:08.749 ************************************ 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.749 ************************************ 00:07:08.749 START TEST filesystem_xfs 00:07:08.749 ************************************ 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:08.749 18:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:09.008 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:09.008 = sectsz=512 attr=2, projid32bit=1 00:07:09.008 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:09.008 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:09.008 data = bsize=4096 blocks=130560, imaxpct=25 00:07:09.008 = sunit=0 swidth=0 blks 00:07:09.008 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:09.008 log =internal log bsize=4096 blocks=16384, version=2 00:07:09.008 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:09.008 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.575 Discarding blocks...Done. 
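
Stripped of the xtrace noise, everything from the nvmf_create_transport call down to the nvme connect above is a short provisioning sequence. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py, the serial-poll loop below is simplified from waitforserial, and the second TEST further down repeats the whole sequence with -c 4096 in place of -c 0 to allow up to 4 KiB of in-capsule write data:

# Target side: transport, backing bdev, subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: connect, then poll until the namespace shows up in
# lsblk with the subsystem's serial number.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
# Teardown, traced below, mirrors it: disconnect, delete, kill.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"
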
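The size check above (get_bdev_size feeding malloc_size, then nvme_size) is simple arithmetic over the bdev JSON dumped earlier; a sketch assuming the same Malloc1 bdev:

# get_bdev_size: MiB = block_size * num_blocks from bdev_get_bdevs.
bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512 in this run
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576 in this run
echo $(( bs * nb / 1024 / 1024 ))             # 512 -> malloc_size=536870912 bytes

sec_size_to_bytes derives the same figure from /sys/block/nvme0n1, so the test can assert nvme_size == malloc_size before partitioning the device.
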
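Each filesystem_* subtest in this file then drives the same make_filesystem/mount/touch/sync/rm/umount check traced above; only the mkfs force flag differs (-F for ext4, -f for btrfs and xfs, per the make_filesystem branches). Condensed, with the partition and mountpoint from this run (the xfs pass completes just below):

# One smoke-test pass per filesystem over the GPT partition made by parted.
for fstype in ext4 btrfs xfs; do
    force=-f
    [ "$fstype" = ext4 ] && force=-F
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa     # write something through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
done
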
00:07:09.575 18:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:09.575 18:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.107 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3763127 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.366 00:07:12.366 real 0m3.459s 00:07:12.366 user 0m0.032s 00:07:12.366 sys 0m0.063s 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:12.366 ************************************ 00:07:12.366 END TEST filesystem_xfs 00:07:12.366 ************************************ 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:12.366 18:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:12.624 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:12.624 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.883 18:21:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3763127 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3763127 ']' 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3763127 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3763127 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3763127' 00:07:12.883 killing process with pid 3763127 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3763127 00:07:12.883 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3763127 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:13.142 00:07:13.142 real 0m12.217s 00:07:13.142 user 0m47.984s 00:07:13.142 sys 0m1.197s 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.142 ************************************ 00:07:13.142 END TEST nvmf_filesystem_no_in_capsule 00:07:13.142 ************************************ 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.142 ************************************ 00:07:13.142 START TEST nvmf_filesystem_in_capsule 00:07:13.142 ************************************ 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.142 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3765357 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3765357 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3765357 ']' 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.401 18:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 [2024-07-15 18:21:58.745726] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:13.401 [2024-07-15 18:21:58.745762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.401 [2024-07-15 18:21:58.800094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.401 [2024-07-15 18:21:58.877156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.401 [2024-07-15 18:21:58.877194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:13.401 [2024-07-15 18:21:58.877203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.401 [2024-07-15 18:21:58.877210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.401 [2024-07-15 18:21:58.877215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.401 [2024-07-15 18:21:58.877259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.401 [2024-07-15 18:21:58.877412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.401 [2024-07-15 18:21:58.881354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.401 [2024-07-15 18:21:58.881356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 [2024-07-15 18:21:59.621403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 Malloc1 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.338 18:21:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.339 [2024-07-15 18:21:59.764826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:14.339 { 00:07:14.339 "name": "Malloc1", 00:07:14.339 "aliases": [ 00:07:14.339 "dbefeffb-7ec2-498d-b253-46ef750f99d6" 00:07:14.339 ], 00:07:14.339 "product_name": "Malloc disk", 00:07:14.339 "block_size": 512, 00:07:14.339 "num_blocks": 1048576, 00:07:14.339 "uuid": "dbefeffb-7ec2-498d-b253-46ef750f99d6", 00:07:14.339 "assigned_rate_limits": { 00:07:14.339 "rw_ios_per_sec": 0, 00:07:14.339 "rw_mbytes_per_sec": 0, 00:07:14.339 "r_mbytes_per_sec": 0, 00:07:14.339 "w_mbytes_per_sec": 0 00:07:14.339 }, 00:07:14.339 "claimed": true, 00:07:14.339 "claim_type": "exclusive_write", 00:07:14.339 "zoned": false, 00:07:14.339 "supported_io_types": { 00:07:14.339 "read": true, 00:07:14.339 "write": true, 00:07:14.339 "unmap": true, 00:07:14.339 "flush": true, 00:07:14.339 "reset": true, 00:07:14.339 "nvme_admin": false, 00:07:14.339 "nvme_io": false, 00:07:14.339 "nvme_io_md": false, 00:07:14.339 "write_zeroes": true, 00:07:14.339 "zcopy": true, 00:07:14.339 "get_zone_info": false, 00:07:14.339 "zone_management": false, 00:07:14.339 
"zone_append": false, 00:07:14.339 "compare": false, 00:07:14.339 "compare_and_write": false, 00:07:14.339 "abort": true, 00:07:14.339 "seek_hole": false, 00:07:14.339 "seek_data": false, 00:07:14.339 "copy": true, 00:07:14.339 "nvme_iov_md": false 00:07:14.339 }, 00:07:14.339 "memory_domains": [ 00:07:14.339 { 00:07:14.339 "dma_device_id": "system", 00:07:14.339 "dma_device_type": 1 00:07:14.339 }, 00:07:14.339 { 00:07:14.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.339 "dma_device_type": 2 00:07:14.339 } 00:07:14.339 ], 00:07:14.339 "driver_specific": {} 00:07:14.339 } 00:07:14.339 ]' 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:14.339 18:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.715 18:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.715 18:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.715 18:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.715 18:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.715 18:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:17.613 18:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:17.613 18:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:17.613 18:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:17.613 18:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:17.871 18:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:18.804 18:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:19.740 18:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:19.740 18:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.740 18:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:19.740 18:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.740 18:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 ************************************ 00:07:19.740 START TEST filesystem_in_capsule_ext4 00:07:19.740 ************************************ 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:19.740 18:22:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:19.740 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:19.740 mke2fs 1.46.5 (30-Dec-2021) 00:07:19.740 Discarding device blocks: 0/522240 done 00:07:19.740 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:19.740 Filesystem UUID: 0c97317e-fc7b-47e2-9ff7-2ba638a5383b 00:07:19.740 Superblock backups stored on blocks: 00:07:19.740 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:19.740 00:07:19.740 Allocating group tables: 0/64 done 00:07:19.740 Writing inode tables: 0/64 done 00:07:19.999 Creating journal (8192 blocks): done 00:07:20.024 Writing superblocks and filesystem accounting information: 0/64 done 00:07:20.024 00:07:20.024 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:20.024 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3765357 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.283 00:07:20.283 real 0m0.726s 00:07:20.283 user 0m0.027s 00:07:20.283 sys 0m0.060s 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:20.283 ************************************ 00:07:20.283 END TEST filesystem_in_capsule_ext4 00:07:20.283 ************************************ 00:07:20.283 
18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.283 ************************************ 00:07:20.283 START TEST filesystem_in_capsule_btrfs 00:07:20.283 ************************************ 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:20.283 18:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:20.542 btrfs-progs v6.6.2 00:07:20.542 See https://btrfs.readthedocs.io for more information. 00:07:20.542 00:07:20.542 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:20.542 NOTE: several default settings have changed in version 5.15, please make sure 00:07:20.542 this does not affect your deployments: 00:07:20.542 - DUP for metadata (-m dup) 00:07:20.542 - enabled no-holes (-O no-holes) 00:07:20.542 - enabled free-space-tree (-R free-space-tree) 00:07:20.542 00:07:20.542 Label: (null) 00:07:20.542 UUID: d9aa5479-f9fe-45bc-ad2b-edb4a6c79366 00:07:20.542 Node size: 16384 00:07:20.542 Sector size: 4096 00:07:20.542 Filesystem size: 510.00MiB 00:07:20.542 Block group profiles: 00:07:20.542 Data: single 8.00MiB 00:07:20.542 Metadata: DUP 32.00MiB 00:07:20.542 System: DUP 8.00MiB 00:07:20.542 SSD detected: yes 00:07:20.542 Zoned device: no 00:07:20.542 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:20.542 Runtime features: free-space-tree 00:07:20.542 Checksum: crc32c 00:07:20.542 Number of devices: 1 00:07:20.542 Devices: 00:07:20.542 ID SIZE PATH 00:07:20.542 1 510.00MiB /dev/nvme0n1p1 00:07:20.542 00:07:20.542 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:20.542 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3765357 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.490 00:07:21.490 real 0m0.923s 00:07:21.490 user 0m0.022s 00:07:21.490 sys 0m0.127s 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:21.490 ************************************ 00:07:21.490 END TEST filesystem_in_capsule_btrfs 00:07:21.490 ************************************ 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.490 ************************************ 00:07:21.490 START TEST filesystem_in_capsule_xfs 00:07:21.490 ************************************ 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:21.490 18:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:21.490 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:21.490 = sectsz=512 attr=2, projid32bit=1 00:07:21.490 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:21.490 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:21.490 data = bsize=4096 blocks=130560, imaxpct=25 00:07:21.490 = sunit=0 swidth=0 blks 00:07:21.490 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:21.490 log =internal log bsize=4096 blocks=16384, version=2 00:07:21.490 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:21.490 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:22.424 Discarding blocks...Done. 
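At this point the same make_filesystem helper from common/autotest_common.sh has been driven three times, formatting /dev/nvme0n1p1 as ext4, btrfs, and now xfs. Its traced logic reduces to roughly the shape below; the retry counter (the local i=0 visible above) exists in the real helper but its retry path is never taken in this run, so that detail is a reading of the trace rather than something shown here.

    # simplified shape of make_filesystem as traced above
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs spells "force" as -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }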
00:07:22.424 18:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:22.424 18:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3765357 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.953 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.954 00:07:24.954 real 0m3.457s 00:07:24.954 user 0m0.027s 00:07:24.954 sys 0m0.068s 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 ************************************ 00:07:24.954 END TEST filesystem_in_capsule_xfs 00:07:24.954 ************************************ 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:24.954 18:22:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3765357 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3765357 ']' 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3765357 00:07:24.954 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3765357 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3765357' 00:07:25.212 killing process with pid 3765357 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3765357 00:07:25.212 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3765357 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:25.471 00:07:25.471 real 0m12.201s 00:07:25.471 user 0m47.954s 00:07:25.471 sys 0m1.217s 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 ************************************ 00:07:25.471 END TEST nvmf_filesystem_in_capsule 00:07:25.471 ************************************ 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.471 rmmod nvme_tcp 00:07:25.471 rmmod nvme_fabrics 00:07:25.471 rmmod nvme_keyring 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.471 18:22:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.056 18:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.056 00:07:28.056 real 0m32.799s 00:07:28.056 user 1m37.732s 00:07:28.056 sys 0m6.996s 00:07:28.056 18:22:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.056 18:22:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.056 ************************************ 00:07:28.056 END TEST nvmf_filesystem 00:07:28.056 ************************************ 00:07:28.056 18:22:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:28.056 18:22:13 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:28.056 18:22:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.056 18:22:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.056 18:22:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.056 ************************************ 00:07:28.056 START TEST nvmf_target_discovery 00:07:28.056 ************************************ 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:28.056 * Looking for test storage... 
00:07:28.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.056 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.057 18:22:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.332 18:22:18 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:33.332 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.332 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.332 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:33.333 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.591 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:33.592 00:07:33.592 --- 10.0.0.2 ping statistics --- 00:07:33.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.592 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:33.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:33.592 00:07:33.592 --- 10.0.0.1 ping statistics --- 00:07:33.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.592 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:33.592 18:22:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3771158 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3771158 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3771158 ']' 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:33.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.592 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:33.592 [2024-07-15 18:22:19.069730] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:33.592 [2024-07-15 18:22:19.069777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.592 [2024-07-15 18:22:19.141797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.851 [2024-07-15 18:22:19.221233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.851 [2024-07-15 18:22:19.221267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.851 [2024-07-15 18:22:19.221274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.851 [2024-07-15 18:22:19.221279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.851 [2024-07-15 18:22:19.221284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.851 [2024-07-15 18:22:19.221394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.851 [2024-07-15 18:22:19.221437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.851 [2024-07-15 18:22:19.221555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.851 [2024-07-15 18:22:19.221557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 [2024-07-15 18:22:19.923097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
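The per-subsystem setup that discovery.sh is stepping through here (and continues below) condenses to one loop; the names, sizes, and flags are exactly as traced, with NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 from the test's header above.

    # one null bdev + one NVMe-oF subsystem + one TCP listener per index
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

The discovery listener and the port 4430 referral are then added once each (the nvmf_subsystem_add_listener discovery and nvmf_discovery_add_referral calls visible below), which is why the nvme discover output that follows reports six records: the current discovery subsystem, the four NVMe subsystems, and the referral.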
00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 Null1 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.419 [2024-07-15 18:22:19.968459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.419 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 Null2 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:34.678 18:22:19 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 Null3 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 Null4 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.678 18:22:20 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.678 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.679 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:34.937 00:07:34.937 Discovery Log Number of Records 6, Generation counter 6 00:07:34.937 =====Discovery Log Entry 0====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: current discovery subsystem 00:07:34.937 treq: not required 00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4420 00:07:34.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: explicit discovery connections, duplicate discovery information 00:07:34.937 sectype: none 00:07:34.937 =====Discovery Log Entry 1====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: nvme subsystem 00:07:34.937 treq: not required 00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4420 00:07:34.937 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: none 00:07:34.937 sectype: none 00:07:34.937 =====Discovery Log Entry 2====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: nvme subsystem 00:07:34.937 treq: not required 00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4420 00:07:34.937 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: none 00:07:34.937 sectype: none 00:07:34.937 =====Discovery Log Entry 3====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: nvme subsystem 00:07:34.937 treq: not required 00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4420 00:07:34.937 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: none 00:07:34.937 sectype: none 00:07:34.937 =====Discovery Log Entry 4====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: nvme subsystem 00:07:34.937 treq: not required 
00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4420 00:07:34.937 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: none 00:07:34.937 sectype: none 00:07:34.937 =====Discovery Log Entry 5====== 00:07:34.937 trtype: tcp 00:07:34.937 adrfam: ipv4 00:07:34.937 subtype: discovery subsystem referral 00:07:34.937 treq: not required 00:07:34.937 portid: 0 00:07:34.937 trsvcid: 4430 00:07:34.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:34.937 traddr: 10.0.0.2 00:07:34.937 eflags: none 00:07:34.937 sectype: none 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:34.937 Perform nvmf subsystem discovery via RPC 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 [ 00:07:34.937 { 00:07:34.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:34.937 "subtype": "Discovery", 00:07:34.937 "listen_addresses": [ 00:07:34.937 { 00:07:34.937 "trtype": "TCP", 00:07:34.937 "adrfam": "IPv4", 00:07:34.937 "traddr": "10.0.0.2", 00:07:34.937 "trsvcid": "4420" 00:07:34.937 } 00:07:34.937 ], 00:07:34.937 "allow_any_host": true, 00:07:34.937 "hosts": [] 00:07:34.937 }, 00:07:34.937 { 00:07:34.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:34.937 "subtype": "NVMe", 00:07:34.937 "listen_addresses": [ 00:07:34.937 { 00:07:34.937 "trtype": "TCP", 00:07:34.937 "adrfam": "IPv4", 00:07:34.937 "traddr": "10.0.0.2", 00:07:34.937 "trsvcid": "4420" 00:07:34.937 } 00:07:34.937 ], 00:07:34.937 "allow_any_host": true, 00:07:34.937 "hosts": [], 00:07:34.937 "serial_number": "SPDK00000000000001", 00:07:34.937 "model_number": "SPDK bdev Controller", 00:07:34.937 "max_namespaces": 32, 00:07:34.937 "min_cntlid": 1, 00:07:34.937 "max_cntlid": 65519, 00:07:34.937 "namespaces": [ 00:07:34.937 { 00:07:34.937 "nsid": 1, 00:07:34.937 "bdev_name": "Null1", 00:07:34.937 "name": "Null1", 00:07:34.937 "nguid": "92BFC55864E54F6699FE7060D5622C53", 00:07:34.937 "uuid": "92bfc558-64e5-4f66-99fe-7060d5622c53" 00:07:34.937 } 00:07:34.937 ] 00:07:34.937 }, 00:07:34.937 { 00:07:34.937 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:34.937 "subtype": "NVMe", 00:07:34.937 "listen_addresses": [ 00:07:34.937 { 00:07:34.937 "trtype": "TCP", 00:07:34.937 "adrfam": "IPv4", 00:07:34.937 "traddr": "10.0.0.2", 00:07:34.937 "trsvcid": "4420" 00:07:34.937 } 00:07:34.937 ], 00:07:34.937 "allow_any_host": true, 00:07:34.937 "hosts": [], 00:07:34.937 "serial_number": "SPDK00000000000002", 00:07:34.937 "model_number": "SPDK bdev Controller", 00:07:34.937 "max_namespaces": 32, 00:07:34.937 "min_cntlid": 1, 00:07:34.937 "max_cntlid": 65519, 00:07:34.937 "namespaces": [ 00:07:34.937 { 00:07:34.937 "nsid": 1, 00:07:34.937 "bdev_name": "Null2", 00:07:34.937 "name": "Null2", 00:07:34.937 "nguid": "2815782CD43449E4AA639A197D3B3A56", 00:07:34.937 "uuid": "2815782c-d434-49e4-aa63-9a197d3b3a56" 00:07:34.937 } 00:07:34.937 ] 00:07:34.937 }, 00:07:34.937 { 00:07:34.937 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:34.937 "subtype": "NVMe", 00:07:34.937 "listen_addresses": [ 00:07:34.937 { 00:07:34.937 "trtype": "TCP", 00:07:34.937 "adrfam": "IPv4", 00:07:34.937 "traddr": "10.0.0.2", 00:07:34.937 "trsvcid": "4420" 00:07:34.937 } 00:07:34.937 ], 00:07:34.937 "allow_any_host": true, 
00:07:34.937 "hosts": [], 00:07:34.937 "serial_number": "SPDK00000000000003", 00:07:34.937 "model_number": "SPDK bdev Controller", 00:07:34.937 "max_namespaces": 32, 00:07:34.937 "min_cntlid": 1, 00:07:34.937 "max_cntlid": 65519, 00:07:34.937 "namespaces": [ 00:07:34.937 { 00:07:34.937 "nsid": 1, 00:07:34.937 "bdev_name": "Null3", 00:07:34.937 "name": "Null3", 00:07:34.937 "nguid": "BEE232DCEE2240B2AEB98FB8FAA041F2", 00:07:34.937 "uuid": "bee232dc-ee22-40b2-aeb9-8fb8faa041f2" 00:07:34.937 } 00:07:34.937 ] 00:07:34.937 }, 00:07:34.937 { 00:07:34.937 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:34.937 "subtype": "NVMe", 00:07:34.937 "listen_addresses": [ 00:07:34.937 { 00:07:34.937 "trtype": "TCP", 00:07:34.937 "adrfam": "IPv4", 00:07:34.937 "traddr": "10.0.0.2", 00:07:34.937 "trsvcid": "4420" 00:07:34.937 } 00:07:34.937 ], 00:07:34.937 "allow_any_host": true, 00:07:34.937 "hosts": [], 00:07:34.937 "serial_number": "SPDK00000000000004", 00:07:34.937 "model_number": "SPDK bdev Controller", 00:07:34.937 "max_namespaces": 32, 00:07:34.937 "min_cntlid": 1, 00:07:34.937 "max_cntlid": 65519, 00:07:34.937 "namespaces": [ 00:07:34.937 { 00:07:34.937 "nsid": 1, 00:07:34.937 "bdev_name": "Null4", 00:07:34.937 "name": "Null4", 00:07:34.937 "nguid": "1FD4F28BFEA4400D848547A86566E92C", 00:07:34.937 "uuid": "1fd4f28b-fea4-400d-8485-47a86566e92c" 00:07:34.937 } 00:07:34.937 ] 00:07:34.937 } 00:07:34.937 ] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.937 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.938 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.938 rmmod nvme_tcp 00:07:34.938 rmmod nvme_fabrics 00:07:34.938 rmmod nvme_keyring 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3771158 ']' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3771158 ']' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3771158' 00:07:35.196 killing process with pid 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3771158 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.196 18:22:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.731 18:22:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.731 00:07:37.731 real 0m9.668s 00:07:37.731 user 0m7.802s 00:07:37.731 sys 0m4.698s 00:07:37.731 18:22:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.731 18:22:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.731 ************************************ 00:07:37.731 END TEST nvmf_target_discovery 00:07:37.731 ************************************ 00:07:37.731 18:22:22 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:37.731 18:22:22 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:37.731 18:22:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.731 18:22:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.731 18:22:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.731 ************************************ 00:07:37.731 START TEST nvmf_referrals 00:07:37.731 ************************************ 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:37.731 * Looking for test storage... 00:07:37.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.731 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
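
referrals.sh pins three referral addresses (127.0.0.2 through 127.0.0.4) against the referral port defined next, and the body of the test, once the target is up, is a round-trip of those records through both the RPC and the discovery log. Distilled into a sketch (the rpc.py invocation and the 10.0.0.2:8009 discovery listener mirror what the harness does below; treat the exact paths as assumptions):

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3
  # referrals appear as discovery-log records of subtype "discovery subsystem referral"
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done

The run below also re-adds 127.0.0.2 with an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1) to check that a referral can point at a specific subsystem rather than at another discovery service, and removes it the same way.
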
00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.732 18:22:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.732 18:22:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.000 18:22:28 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.000 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:43.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:43.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.001 18:22:28 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:43.001 Found net devices under 0000:86:00.0: cvl_0_0 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:43.001 Found net devices under 0000:86:00.1: cvl_0_1 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.001 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.260 18:22:28 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:07:43.260 00:07:43.260 --- 10.0.0.2 ping statistics --- 00:07:43.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.260 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:43.260 00:07:43.260 --- 10.0.0.1 ping statistics --- 00:07:43.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.260 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3774908 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3774908 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3774908 ']' 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
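
Everything from nvmf_tcp_init to here is the standard phy-mode TCP fixture: one of the two e810 ports is moved into a private network namespace to act as the target side (10.0.0.2) while its peer stays in the root namespace as the initiator (10.0.0.1), and a pair of pings proves the path before any NVMe traffic flows. Condensed from the commands above (cvl_0_0 and cvl_0_1 are this harness's names for the two ports):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator keeps the peer port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

The target itself runs inside that namespace, which is why nvmf_tgt above is launched via ip netns exec cvl_0_0_ns_spdk; waitforlisten then blocks until the app's RPC socket (/var/tmp/spdk.sock) accepts connections, which is the wait printed next.
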
00:07:43.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.260 18:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.260 [2024-07-15 18:22:28.806092] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:43.260 [2024-07-15 18:22:28.806137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.519 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.519 [2024-07-15 18:22:28.874167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.519 [2024-07-15 18:22:28.953881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.519 [2024-07-15 18:22:28.953918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.519 [2024-07-15 18:22:28.953924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.519 [2024-07-15 18:22:28.953930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.519 [2024-07-15 18:22:28.953935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.519 [2024-07-15 18:22:28.953989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.519 [2024-07-15 18:22:28.954098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.519 [2024-07-15 18:22:28.954226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.519 [2024-07-15 18:22:28.954227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.086 [2024-07-15 18:22:29.636148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.086 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 [2024-07-15 18:22:29.649411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.344 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:44.603 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.862 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.121 18:22:30 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.121 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.380 18:22:30 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.380 18:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.638 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.897 
18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.897 rmmod nvme_tcp 00:07:45.897 rmmod nvme_fabrics 00:07:45.897 rmmod nvme_keyring 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3774908 ']' 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3774908 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3774908 ']' 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3774908 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774908 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774908' 00:07:45.897 killing process with pid 3774908 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3774908 00:07:45.897 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3774908 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.157 18:22:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.693 18:22:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.693 00:07:48.693 real 0m10.783s 00:07:48.693 user 0m12.893s 00:07:48.693 sys 0m5.026s 00:07:48.693 18:22:33 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.693 18:22:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.693 ************************************ 00:07:48.693 END TEST nvmf_referrals 00:07:48.693 ************************************ 00:07:48.693 18:22:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.693 18:22:33 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.693 18:22:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.693 18:22:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.693 18:22:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.693 ************************************ 00:07:48.693 START TEST nvmf_connect_disconnect 00:07:48.693 ************************************ 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.693 * Looking for test storage... 00:07:48.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.693 18:22:33 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.693 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.694 18:22:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.965 18:22:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.965 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.965 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.965 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:07:54.224 00:07:54.224 --- 10.0.0.2 ping statistics --- 00:07:54.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.224 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:54.224 00:07:54.224 --- 10.0.0.1 ping statistics --- 00:07:54.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.224 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3778802 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3778802 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3778802 ']' 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.224 18:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:54.224 [2024-07-15 18:22:39.658578] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:54.225 [2024-07-15 18:22:39.658618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.225 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.225 [2024-07-15 18:22:39.727794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.483 [2024-07-15 18:22:39.806677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.483 [2024-07-15 18:22:39.806716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.483 [2024-07-15 18:22:39.806723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.483 [2024-07-15 18:22:39.806729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.483 [2024-07-15 18:22:39.806733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.483 [2024-07-15 18:22:39.806786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.483 [2024-07-15 18:22:39.809355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.483 [2024-07-15 18:22:39.809401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.483 [2024-07-15 18:22:39.809402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 [2024-07-15 18:22:40.499244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.050 18:22:40 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.050 [2024-07-15 18:22:40.550719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:55.050 18:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:58.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.526 rmmod nvme_tcp 00:08:11.526 rmmod nvme_fabrics 00:08:11.526 rmmod nvme_keyring 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3778802 ']' 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3778802 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3778802 ']' 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3778802 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3778802 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3778802' 00:08:11.526 killing process with pid 3778802 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3778802 00:08:11.526 18:22:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3778802 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.526 18:22:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.122 18:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.122 00:08:14.122 real 0m25.347s 00:08:14.122 user 1m9.973s 00:08:14.122 sys 0m5.524s 00:08:14.122 18:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.122 18:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 ************************************ 00:08:14.122 END TEST nvmf_connect_disconnect 00:08:14.122 ************************************ 00:08:14.122 18:22:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:14.122 18:22:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:14.122 18:22:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.122 18:22:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.122 18:22:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.122 ************************************ 00:08:14.122 START TEST nvmf_multitarget 00:08:14.122 ************************************ 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:14.122 * Looking for test storage... 
00:08:14.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.122 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
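[Annotation] The nvmftestinit bring-up that follows (the connect/disconnect suite ran the identical sequence earlier in this log) reduces to a short netns-based loopback topology: one port of the e810 pair is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator. The commands below are taken directly from this trace; the interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.x/24 addresses are the values this particular run happens to use, not fixed constants:

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside ns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

The nvmf_tgt application is then launched with `ip netns exec cvl_0_0_ns_spdk ...` so that it listens on 10.0.0.2 inside the namespace while the nvme-cli initiator connects from the root namespace.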
00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.123 18:22:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.394 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.394 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.394 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.652 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.652 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.652 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:08:19.652 00:08:19.652 --- 10.0.0.2 ping statistics --- 00:08:19.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.652 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:19.652 18:23:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:19.652 00:08:19.652 --- 10.0.0.1 ping statistics --- 00:08:19.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.652 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3785243 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3785243 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3785243 ']' 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.652 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.652 [2024-07-15 18:23:05.089113] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:08:19.652 [2024-07-15 18:23:05.089152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.652 [2024-07-15 18:23:05.157585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.910 [2024-07-15 18:23:05.231419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.910 [2024-07-15 18:23:05.231458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.910 [2024-07-15 18:23:05.231465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.910 [2024-07-15 18:23:05.231470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.910 [2024-07-15 18:23:05.231479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.910 [2024-07-15 18:23:05.231556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.910 [2024-07-15 18:23:05.231663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.910 [2024-07-15 18:23:05.231769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.910 [2024-07-15 18:23:05.231771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.476 18:23:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:20.733 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:20.733 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:20.733 "nvmf_tgt_1" 00:08:20.733 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:20.733 "nvmf_tgt_2" 00:08:20.733 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.733 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:20.990 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:20.990 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:20.990 true 00:08:20.990 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:20.990 true 00:08:20.990 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.990 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:21.248 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:21.248 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:21.248 18:23:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:21.248 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.249 rmmod nvme_tcp 00:08:21.249 rmmod nvme_fabrics 00:08:21.249 rmmod nvme_keyring 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3785243 ']' 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3785243 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3785243 ']' 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3785243 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3785243 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3785243' 00:08:21.249 killing process with pid 3785243 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3785243 00:08:21.249 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3785243 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.508 18:23:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.041 18:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.041 00:08:24.041 real 0m9.893s 00:08:24.041 user 0m9.268s 00:08:24.041 sys 0m4.749s 00:08:24.041 18:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.041 18:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:24.041 ************************************ 00:08:24.041 END TEST nvmf_multitarget 00:08:24.041 ************************************ 00:08:24.041 18:23:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:24.041 18:23:09 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.041 18:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.041 18:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.041 18:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.041 ************************************ 00:08:24.041 START TEST nvmf_rpc 00:08:24.041 ************************************ 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.041 * Looking for test storage... 
00:08:24.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.041 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.042 18:23:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
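# --- Annotation (illustrative sketch, not part of the captured log) ---
# The nvmf/common.sh trace above pins down the environment every tcp test in
# this run reuses; the values below are the ones visible in the trace, and
# NVMF_APP is assumed to already be an array holding the nvmf_tgt command.
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + full tracepoint mask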
00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.310 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.310 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.310 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.310 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.311 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:08:29.568 00:08:29.568 --- 10.0.0.2 ping statistics --- 00:08:29.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.568 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:29.568 00:08:29.568 --- 10.0.0.1 ping statistics --- 00:08:29.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.568 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3789042 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3789042 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3789042 ']' 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.568 18:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.568 [2024-07-15 18:23:15.041248] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:29.568 [2024-07-15 18:23:15.041295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.568 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.568 [2024-07-15 18:23:15.112496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.827 [2024-07-15 18:23:15.194321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.827 [2024-07-15 18:23:15.194362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.827 [2024-07-15 18:23:15.194369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.827 [2024-07-15 18:23:15.194378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.827 [2024-07-15 18:23:15.194383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.827 [2024-07-15 18:23:15.194444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.827 [2024-07-15 18:23:15.194471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.827 [2024-07-15 18:23:15.194644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.827 [2024-07-15 18:23:15.194646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.393 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.393 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:30.394 "tick_rate": 2100000000, 00:08:30.394 "poll_groups": [ 00:08:30.394 { 00:08:30.394 "name": "nvmf_tgt_poll_group_000", 00:08:30.394 "admin_qpairs": 0, 00:08:30.394 "io_qpairs": 0, 00:08:30.394 "current_admin_qpairs": 0, 00:08:30.394 "current_io_qpairs": 0, 00:08:30.394 "pending_bdev_io": 0, 00:08:30.394 "completed_nvme_io": 0, 00:08:30.394 "transports": [] 00:08:30.394 }, 00:08:30.394 { 00:08:30.394 "name": "nvmf_tgt_poll_group_001", 00:08:30.394 "admin_qpairs": 0, 00:08:30.394 "io_qpairs": 0, 00:08:30.394 "current_admin_qpairs": 0, 00:08:30.394 "current_io_qpairs": 0, 00:08:30.394 "pending_bdev_io": 0, 00:08:30.394 "completed_nvme_io": 0, 00:08:30.394 "transports": [] 00:08:30.394 }, 00:08:30.394 { 00:08:30.394 "name": "nvmf_tgt_poll_group_002", 00:08:30.394 "admin_qpairs": 0, 00:08:30.394 "io_qpairs": 0, 00:08:30.394 "current_admin_qpairs": 0, 00:08:30.394 "current_io_qpairs": 0, 00:08:30.394 "pending_bdev_io": 0, 00:08:30.394 "completed_nvme_io": 0, 00:08:30.394 "transports": [] 00:08:30.394 }, 00:08:30.394 { 00:08:30.394 "name": "nvmf_tgt_poll_group_003", 00:08:30.394 "admin_qpairs": 0, 00:08:30.394 "io_qpairs": 0, 00:08:30.394 "current_admin_qpairs": 0, 00:08:30.394 "current_io_qpairs": 0, 00:08:30.394 "pending_bdev_io": 0, 00:08:30.394 "completed_nvme_io": 0, 00:08:30.394 "transports": [] 00:08:30.394 } 00:08:30.394 ] 00:08:30.394 }' 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:30.394 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:30.653 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:30.653 18:23:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.653 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 [2024-07-15 18:23:15.997560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:30.653 "tick_rate": 2100000000, 00:08:30.653 "poll_groups": [ 00:08:30.653 { 00:08:30.653 "name": "nvmf_tgt_poll_group_000", 00:08:30.653 "admin_qpairs": 0, 00:08:30.653 "io_qpairs": 0, 00:08:30.653 "current_admin_qpairs": 0, 00:08:30.653 "current_io_qpairs": 0, 00:08:30.653 "pending_bdev_io": 0, 00:08:30.653 "completed_nvme_io": 0, 00:08:30.653 "transports": [ 00:08:30.653 { 00:08:30.653 "trtype": "TCP" 00:08:30.653 } 00:08:30.653 ] 00:08:30.653 }, 00:08:30.653 { 00:08:30.653 "name": "nvmf_tgt_poll_group_001", 00:08:30.653 "admin_qpairs": 0, 00:08:30.653 "io_qpairs": 0, 00:08:30.653 "current_admin_qpairs": 0, 00:08:30.653 "current_io_qpairs": 0, 00:08:30.653 "pending_bdev_io": 0, 00:08:30.653 "completed_nvme_io": 0, 00:08:30.653 "transports": [ 00:08:30.653 { 00:08:30.653 "trtype": "TCP" 00:08:30.653 } 00:08:30.653 ] 00:08:30.653 }, 00:08:30.653 { 00:08:30.653 "name": "nvmf_tgt_poll_group_002", 00:08:30.653 "admin_qpairs": 0, 00:08:30.653 "io_qpairs": 0, 00:08:30.653 "current_admin_qpairs": 0, 00:08:30.653 "current_io_qpairs": 0, 00:08:30.653 "pending_bdev_io": 0, 00:08:30.653 "completed_nvme_io": 0, 00:08:30.653 "transports": [ 00:08:30.653 { 00:08:30.653 "trtype": "TCP" 00:08:30.653 } 00:08:30.653 ] 00:08:30.653 }, 00:08:30.653 { 00:08:30.653 "name": "nvmf_tgt_poll_group_003", 00:08:30.653 "admin_qpairs": 0, 00:08:30.653 "io_qpairs": 0, 00:08:30.653 "current_admin_qpairs": 0, 00:08:30.653 "current_io_qpairs": 0, 00:08:30.653 "pending_bdev_io": 0, 00:08:30.653 "completed_nvme_io": 0, 00:08:30.653 "transports": [ 00:08:30.653 { 00:08:30.653 "trtype": "TCP" 00:08:30.653 } 00:08:30.653 ] 00:08:30.653 } 00:08:30.653 ] 00:08:30.653 }' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 Malloc1 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.653 [2024-07-15 18:23:16.161238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:30.653 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:30.654 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:30.654 [2024-07-15 18:23:16.189740] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:08:30.912 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:30.912 could not add new controller: failed to write to nvme-fabrics device 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.912 18:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:31.848 18:23:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.848 18:23:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:31.848 18:23:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.848 18:23:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:31.848 18:23:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.379 18:23:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.379 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.380 [2024-07-15 18:23:19.453876] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:08:34.380 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:34.380 could not add new controller: failed to write to nvme-fabrics device 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.380 18:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.314 18:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.314 18:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.314 18:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.314 18:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.314 18:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.215 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.215 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.215 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.215 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.215 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.216 18:23:22 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.216 [2024-07-15 18:23:22.762367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.216 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.475 18:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.450 18:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.450 18:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:38.450 18:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.450 18:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:38.450 18:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:40.982 18:23:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:40.982 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.983 [2024-07-15 18:23:26.102521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.983 18:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.919 18:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.919 18:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:41.919 18:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.919 18:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:41.919 18:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.823 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.082 [2024-07-15 18:23:29.384711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.082 18:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.019 18:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.019 18:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.019 18:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.019 18:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.019 18:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 [2024-07-15 18:23:32.707174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.549 18:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.483 18:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.483 18:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.483 18:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.483 18:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.483 18:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.384 
18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.384 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 [2024-07-15 18:23:35.995808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.642 18:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.642 18:23:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.642 18:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:52.018 18:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.018 18:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:52.018 18:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.018 18:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.018 18:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 [2024-07-15 18:23:39.399249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 [2024-07-15 18:23:39.447362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.921 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 [2024-07-15 18:23:39.499500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
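The connect/disconnect cycles traced above turn on the waitforserial helper (autotest_common.sh@1198-1208): after `nvme connect`, it sleeps two seconds and then counts block devices whose SERIAL column matches the subsystem serial, retrying up to 16 times. Only the success path appears in the trace, so the retry/failure structure in this sketch is an assumption; the probe itself is taken verbatim from the traced lsblk/grep lines:

    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        # trace shows [[ -n '' ]]; an optional second arg for the expected
        # device count is an assumption based on that check
        [[ -n "$2" ]] && nvme_device_counter=$2
        sleep 2
        while (( i++ <= 15 )); do
            # count block devices whose serial matches, e.g. SPDKISFASTANDAWESOME
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }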
00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 [2024-07-15 18:23:39.547669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.181 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
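The five iterations traced here (target/rpc.sh@99-107) exercise subsystem setup and teardown over RPC alone, with no host connect in between: create the subsystem, attach a TCP listener and a namespace, open it to any host, then remove the namespace and delete the subsystem. Reassembled from the traced commands, the loop body reads approximately as follows (rpc_cmd is the harness wrapper seen throughout the trace):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done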
00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 [2024-07-15 18:23:39.595831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:54.182 "tick_rate": 2100000000, 00:08:54.182 "poll_groups": [ 00:08:54.182 { 00:08:54.182 "name": "nvmf_tgt_poll_group_000", 00:08:54.182 "admin_qpairs": 2, 00:08:54.182 "io_qpairs": 168, 00:08:54.182 "current_admin_qpairs": 0, 00:08:54.182 "current_io_qpairs": 0, 00:08:54.182 "pending_bdev_io": 0, 00:08:54.182 "completed_nvme_io": 268, 00:08:54.182 "transports": [ 00:08:54.182 { 00:08:54.182 "trtype": "TCP" 00:08:54.182 } 00:08:54.182 ] 00:08:54.182 }, 00:08:54.182 { 00:08:54.182 "name": "nvmf_tgt_poll_group_001", 00:08:54.182 "admin_qpairs": 2, 00:08:54.182 "io_qpairs": 168, 00:08:54.182 "current_admin_qpairs": 0, 00:08:54.182 "current_io_qpairs": 0, 00:08:54.182 "pending_bdev_io": 0, 00:08:54.182 "completed_nvme_io": 316, 00:08:54.182 "transports": [ 00:08:54.182 { 00:08:54.182 "trtype": "TCP" 00:08:54.182 } 00:08:54.182 ] 00:08:54.182 }, 00:08:54.182 { 
00:08:54.182 "name": "nvmf_tgt_poll_group_002", 00:08:54.182 "admin_qpairs": 1, 00:08:54.182 "io_qpairs": 168, 00:08:54.182 "current_admin_qpairs": 0, 00:08:54.182 "current_io_qpairs": 0, 00:08:54.182 "pending_bdev_io": 0, 00:08:54.182 "completed_nvme_io": 220, 00:08:54.182 "transports": [ 00:08:54.182 { 00:08:54.182 "trtype": "TCP" 00:08:54.182 } 00:08:54.182 ] 00:08:54.182 }, 00:08:54.182 { 00:08:54.182 "name": "nvmf_tgt_poll_group_003", 00:08:54.182 "admin_qpairs": 2, 00:08:54.182 "io_qpairs": 168, 00:08:54.182 "current_admin_qpairs": 0, 00:08:54.182 "current_io_qpairs": 0, 00:08:54.182 "pending_bdev_io": 0, 00:08:54.182 "completed_nvme_io": 218, 00:08:54.182 "transports": [ 00:08:54.182 { 00:08:54.182 "trtype": "TCP" 00:08:54.182 } 00:08:54.182 ] 00:08:54.182 } 00:08:54.182 ] 00:08:54.182 }' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:54.182 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.441 rmmod nvme_tcp 00:08:54.441 rmmod nvme_fabrics 00:08:54.441 rmmod nvme_keyring 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3789042 ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3789042 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3789042 ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3789042 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789042 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789042' 00:08:54.441 killing process with pid 3789042 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3789042 00:08:54.441 18:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3789042 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.699 18:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.599 18:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.599 00:08:56.599 real 0m33.045s 00:08:56.599 user 1m40.935s 00:08:56.599 sys 0m6.046s 00:08:56.599 18:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.599 18:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.599 ************************************ 00:08:56.599 END TEST nvmf_rpc 00:08:56.599 ************************************ 00:08:56.857 18:23:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:56.858 18:23:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:56.858 18:23:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:56.858 18:23:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.858 18:23:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.858 ************************************ 00:08:56.858 START TEST nvmf_invalid 00:08:56.858 ************************************ 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:56.858 * Looking for test storage... 
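The pass/fail checks that closed nvmf_rpc above rest on jsum (target/rpc.sh@19-20), which totals one numeric field across the four poll groups reported by nvmf_get_stats. From the captured stats, admin_qpairs sum to 2 + 2 + 1 + 2 = 7 and io_qpairs to 4 * 168 = 672, which is where the (( 7 > 0 )) and (( 672 > 0 )) assertions get their operands. A sketch of the helper as reconstructed from the traced jq and awk invocations; feeding it from the captured $stats variable is an assumption:

    jsum() {
        local filter=$1
        # emit one number per poll group, then total the stream
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }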
00:08:56.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.858 18:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.424 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:03.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:03.425 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:03.425 Found net devices under 0000:86:00.0: cvl_0_0 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:03.425 Found net devices under 0000:86:00.1: cvl_0_1 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.425 18:23:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:03.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:09:03.425 00:09:03.425 --- 10.0.0.2 ping statistics --- 00:09:03.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.425 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:03.425 00:09:03.425 --- 10.0.0.1 ping statistics --- 00:09:03.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.425 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3796810 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3796810 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3796810 ']' 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.425 18:23:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.425 [2024-07-15 18:23:48.211655] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:09:03.425 [2024-07-15 18:23:48.211697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.425 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.425 [2024-07-15 18:23:48.279644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.425 [2024-07-15 18:23:48.355975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.425 [2024-07-15 18:23:48.356012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.425 [2024-07-15 18:23:48.356019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.425 [2024-07-15 18:23:48.356024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.425 [2024-07-15 18:23:48.356029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.425 [2024-07-15 18:23:48.356078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.425 [2024-07-15 18:23:48.356107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.425 [2024-07-15 18:23:48.356134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.425 [2024-07-15 18:23:48.356136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29798 00:09:03.682 [2024-07-15 18:23:49.206631] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:03.682 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:03.682 { 00:09:03.682 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:09:03.682 "tgt_name": "foobar", 00:09:03.682 "method": "nvmf_create_subsystem", 00:09:03.682 "req_id": 1 00:09:03.682 } 00:09:03.682 Got JSON-RPC error response 00:09:03.683 response: 00:09:03.683 { 00:09:03.683 "code": -32603, 00:09:03.683 "message": "Unable to find target foobar" 00:09:03.683 }' 00:09:03.683 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:03.683 { 00:09:03.683 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:09:03.683 "tgt_name": "foobar", 00:09:03.683 "method": "nvmf_create_subsystem", 00:09:03.683 "req_id": 1 00:09:03.683 } 00:09:03.683 Got JSON-RPC error response 00:09:03.683 response: 00:09:03.683 { 00:09:03.683 "code": -32603, 00:09:03.683 "message": "Unable to find target foobar" 
00:09:03.683 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8667 00:09:03.940 [2024-07-15 18:23:49.395305] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8667: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:03.940 { 00:09:03.940 "nqn": "nqn.2016-06.io.spdk:cnode8667", 00:09:03.940 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:03.940 "method": "nvmf_create_subsystem", 00:09:03.940 "req_id": 1 00:09:03.940 } 00:09:03.940 Got JSON-RPC error response 00:09:03.940 response: 00:09:03.940 { 00:09:03.940 "code": -32602, 00:09:03.940 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:03.940 }' 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:03.940 { 00:09:03.940 "nqn": "nqn.2016-06.io.spdk:cnode8667", 00:09:03.940 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:03.940 "method": "nvmf_create_subsystem", 00:09:03.940 "req_id": 1 00:09:03.940 } 00:09:03.940 Got JSON-RPC error response 00:09:03.940 response: 00:09:03.940 { 00:09:03.940 "code": -32602, 00:09:03.940 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:03.940 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:03.940 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13818 00:09:04.200 [2024-07-15 18:23:49.575886] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13818: invalid model number 'SPDK_Controller' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:04.200 { 00:09:04.200 "nqn": "nqn.2016-06.io.spdk:cnode13818", 00:09:04.200 "model_number": "SPDK_Controller\u001f", 00:09:04.200 "method": "nvmf_create_subsystem", 00:09:04.200 "req_id": 1 00:09:04.200 } 00:09:04.200 Got JSON-RPC error response 00:09:04.200 response: 00:09:04.200 { 00:09:04.200 "code": -32602, 00:09:04.200 "message": "Invalid MN SPDK_Controller\u001f" 00:09:04.200 }' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:04.200 { 00:09:04.200 "nqn": "nqn.2016-06.io.spdk:cnode13818", 00:09:04.200 "model_number": "SPDK_Controller\u001f", 00:09:04.200 "method": "nvmf_create_subsystem", 00:09:04.200 "req_id": 1 00:09:04.200 } 00:09:04.200 Got JSON-RPC error response 00:09:04.200 response: 00:09:04.200 { 00:09:04.200 "code": -32602, 00:09:04.200 "message": "Invalid MN SPDK_Controller\u001f" 00:09:04.200 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 
18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:04.200 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 
18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '&?)>IPpK>dM|tK'\''O-F!1F' 00:09:04.201 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&?)>IPpK>dM|tK'\''O-F!1F' nqn.2016-06.io.spdk:cnode2114 00:09:04.459 [2024-07-15 18:23:49.892965] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2114: invalid serial number '&?)>IPpK>dM|tK'O-F!1F' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:04.459 { 00:09:04.459 "nqn": "nqn.2016-06.io.spdk:cnode2114", 00:09:04.459 "serial_number": "&?)>IPpK>dM|tK'\''O-F!1F", 00:09:04.459 "method": "nvmf_create_subsystem", 00:09:04.459 "req_id": 1 00:09:04.459 } 00:09:04.459 Got JSON-RPC error response 00:09:04.459 response: 00:09:04.459 { 
00:09:04.459 "code": -32602, 00:09:04.459 "message": "Invalid SN &?)>IPpK>dM|tK'\''O-F!1F" 00:09:04.459 }' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:04.459 { 00:09:04.459 "nqn": "nqn.2016-06.io.spdk:cnode2114", 00:09:04.459 "serial_number": "&?)>IPpK>dM|tK'O-F!1F", 00:09:04.459 "method": "nvmf_create_subsystem", 00:09:04.459 "req_id": 1 00:09:04.459 } 00:09:04.459 Got JSON-RPC error response 00:09:04.459 response: 00:09:04.459 { 00:09:04.459 "code": -32602, 00:09:04.459 "message": "Invalid SN &?)>IPpK>dM|tK'O-F!1F" 00:09:04.459 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 
00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:04.459 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x68' 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.460 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 
00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'TTDgEyQRuv)hkOlKK8/k~]Q%z4R >V&'\''T( vFEC' 00:09:04.718 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'TTDgEyQRuv)hkOlKK8/k~]Q%z4R >V&'\''T( vFEC' nqn.2016-06.io.spdk:cnode26547 00:09:04.975 [2024-07-15 18:23:50.358590] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26547: invalid model number 'TTDgEyQRuv)hkOlKK8/k~]Q%z4R >V&'T( vFEC' 00:09:04.975 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:04.975 { 00:09:04.975 "nqn": "nqn.2016-06.io.spdk:cnode26547", 00:09:04.975 "model_number": "TTDgEyQRuv\u007f)hkOlKK\u007f8/k~]Q%z4R >V&'\''T( vFEC", 00:09:04.975 "method": "nvmf_create_subsystem", 00:09:04.975 "req_id": 1 00:09:04.975 } 00:09:04.975 Got JSON-RPC error response 00:09:04.975 response: 00:09:04.975 { 00:09:04.975 "code": -32602, 00:09:04.975 "message": "Invalid MN TTDgEyQRuv\u007f)hkOlKK\u007f8/k~]Q%z4R >V&'\''T( vFEC" 00:09:04.975 }' 00:09:04.975 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:04.975 { 00:09:04.975 "nqn": "nqn.2016-06.io.spdk:cnode26547", 00:09:04.975 "model_number": "TTDgEyQRuv\u007f)hkOlKK\u007f8/k~]Q%z4R >V&'T( vFEC", 00:09:04.975 "method": "nvmf_create_subsystem", 00:09:04.975 "req_id": 1 00:09:04.975 } 00:09:04.975 Got JSON-RPC error response 00:09:04.975 response: 00:09:04.975 { 00:09:04.975 "code": -32602, 00:09:04.975 "message": "Invalid MN TTDgEyQRuv\u007f)hkOlKK\u007f8/k~]Q%z4R >V&'T( vFEC" 00:09:04.975 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:04.975 18:23:50 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:05.246 [2024-07-15 18:23:50.547284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:05.246 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:05.504 [2024-07-15 18:23:50.913729] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:05.504 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:05.504 { 00:09:05.504 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:05.504 "listen_address": { 00:09:05.504 "trtype": "tcp", 00:09:05.504 "traddr": "", 00:09:05.504 "trsvcid": "4421" 00:09:05.504 }, 00:09:05.504 "method": "nvmf_subsystem_remove_listener", 00:09:05.504 "req_id": 1 00:09:05.504 } 00:09:05.504 Got JSON-RPC error response 00:09:05.504 response: 00:09:05.504 { 00:09:05.504 "code": -32602, 00:09:05.504 "message": "Invalid parameters" 00:09:05.504 }' 00:09:05.504 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:05.504 { 00:09:05.504 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:05.504 "listen_address": { 00:09:05.504 "trtype": "tcp", 00:09:05.504 "traddr": "", 00:09:05.504 "trsvcid": "4421" 00:09:05.504 }, 00:09:05.504 "method": "nvmf_subsystem_remove_listener", 00:09:05.504 "req_id": 1 00:09:05.504 } 00:09:05.504 Got JSON-RPC error response 00:09:05.504 response: 00:09:05.504 { 00:09:05.504 "code": -32602, 00:09:05.504 "message": "Invalid parameters" 00:09:05.504 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:05.504 18:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19435 -i 0 00:09:05.762 [2024-07-15 18:23:51.090282] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19435: invalid cntlid range [0-65519] 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:05.762 { 00:09:05.762 "nqn": "nqn.2016-06.io.spdk:cnode19435", 00:09:05.762 "min_cntlid": 0, 00:09:05.762 "method": "nvmf_create_subsystem", 00:09:05.762 "req_id": 1 00:09:05.762 } 00:09:05.762 Got JSON-RPC error response 00:09:05.762 response: 00:09:05.762 { 00:09:05.762 "code": -32602, 00:09:05.762 "message": "Invalid cntlid range [0-65519]" 00:09:05.762 }' 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:05.762 { 00:09:05.762 "nqn": "nqn.2016-06.io.spdk:cnode19435", 00:09:05.762 "min_cntlid": 0, 00:09:05.762 "method": "nvmf_create_subsystem", 00:09:05.762 "req_id": 1 00:09:05.762 } 00:09:05.762 Got JSON-RPC error response 00:09:05.762 response: 00:09:05.762 { 00:09:05.762 "code": -32602, 00:09:05.762 "message": "Invalid cntlid range [0-65519]" 00:09:05.762 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10560 -i 65520 00:09:05.762 [2024-07-15 18:23:51.262854] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10560: invalid cntlid range [65520-65519] 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:05.762 { 00:09:05.762 "nqn": "nqn.2016-06.io.spdk:cnode10560", 00:09:05.762 "min_cntlid": 65520, 00:09:05.762 "method": "nvmf_create_subsystem", 00:09:05.762 "req_id": 1 00:09:05.762 } 00:09:05.762 Got JSON-RPC error response 00:09:05.762 response: 00:09:05.762 { 00:09:05.762 "code": -32602, 00:09:05.762 "message": "Invalid cntlid range [65520-65519]" 00:09:05.762 }' 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:05.762 { 00:09:05.762 "nqn": "nqn.2016-06.io.spdk:cnode10560", 00:09:05.762 "min_cntlid": 65520, 00:09:05.762 "method": "nvmf_create_subsystem", 00:09:05.762 "req_id": 1 00:09:05.762 } 00:09:05.762 Got JSON-RPC error response 00:09:05.762 response: 00:09:05.762 { 00:09:05.762 "code": -32602, 00:09:05.762 "message": "Invalid cntlid range [65520-65519]" 00:09:05.762 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:05.762 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20992 -I 0 00:09:06.020 [2024-07-15 18:23:51.435479] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20992: invalid cntlid range [1-0] 00:09:06.020 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:06.020 { 00:09:06.020 "nqn": "nqn.2016-06.io.spdk:cnode20992", 00:09:06.020 "max_cntlid": 0, 00:09:06.020 "method": "nvmf_create_subsystem", 00:09:06.020 "req_id": 1 00:09:06.020 } 00:09:06.020 Got JSON-RPC error response 00:09:06.020 response: 00:09:06.020 { 00:09:06.020 "code": -32602, 00:09:06.020 "message": "Invalid cntlid range [1-0]" 00:09:06.020 }' 00:09:06.020 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:06.020 { 00:09:06.020 "nqn": "nqn.2016-06.io.spdk:cnode20992", 00:09:06.020 "max_cntlid": 0, 00:09:06.020 "method": "nvmf_create_subsystem", 00:09:06.020 "req_id": 1 00:09:06.020 } 00:09:06.020 Got JSON-RPC error response 00:09:06.020 response: 00:09:06.020 { 00:09:06.020 "code": -32602, 00:09:06.020 "message": "Invalid cntlid range [1-0]" 00:09:06.020 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.020 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11429 -I 65520 00:09:06.278 [2024-07-15 18:23:51.624096] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11429: invalid cntlid range [1-65520] 00:09:06.278 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:06.278 { 00:09:06.278 "nqn": "nqn.2016-06.io.spdk:cnode11429", 00:09:06.278 "max_cntlid": 65520, 00:09:06.278 "method": "nvmf_create_subsystem", 00:09:06.278 "req_id": 1 00:09:06.278 } 00:09:06.278 Got JSON-RPC error response 00:09:06.278 response: 00:09:06.278 { 00:09:06.278 "code": -32602, 00:09:06.278 "message": "Invalid cntlid range [1-65520]" 00:09:06.278 }' 00:09:06.278 18:23:51 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:06.278 { 00:09:06.278 "nqn": "nqn.2016-06.io.spdk:cnode11429", 00:09:06.278 "max_cntlid": 65520, 00:09:06.278 "method": "nvmf_create_subsystem", 00:09:06.278 "req_id": 1 00:09:06.278 } 00:09:06.278 Got JSON-RPC error response 00:09:06.278 response: 00:09:06.278 { 00:09:06.278 "code": -32602, 00:09:06.278 "message": "Invalid cntlid range [1-65520]" 00:09:06.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.278 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12130 -i 6 -I 5 00:09:06.278 [2024-07-15 18:23:51.816767] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12130: invalid cntlid range [6-5] 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:06.536 { 00:09:06.536 "nqn": "nqn.2016-06.io.spdk:cnode12130", 00:09:06.536 "min_cntlid": 6, 00:09:06.536 "max_cntlid": 5, 00:09:06.536 "method": "nvmf_create_subsystem", 00:09:06.536 "req_id": 1 00:09:06.536 } 00:09:06.536 Got JSON-RPC error response 00:09:06.536 response: 00:09:06.536 { 00:09:06.536 "code": -32602, 00:09:06.536 "message": "Invalid cntlid range [6-5]" 00:09:06.536 }' 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:06.536 { 00:09:06.536 "nqn": "nqn.2016-06.io.spdk:cnode12130", 00:09:06.536 "min_cntlid": 6, 00:09:06.536 "max_cntlid": 5, 00:09:06.536 "method": "nvmf_create_subsystem", 00:09:06.536 "req_id": 1 00:09:06.536 } 00:09:06.536 Got JSON-RPC error response 00:09:06.536 response: 00:09:06.536 { 00:09:06.536 "code": -32602, 00:09:06.536 "message": "Invalid cntlid range [6-5]" 00:09:06.536 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:06.536 { 00:09:06.536 "name": "foobar", 00:09:06.536 "method": "nvmf_delete_target", 00:09:06.536 "req_id": 1 00:09:06.536 } 00:09:06.536 Got JSON-RPC error response 00:09:06.536 response: 00:09:06.536 { 00:09:06.536 "code": -32602, 00:09:06.536 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:06.536 }' 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:06.536 { 00:09:06.536 "name": "foobar", 00:09:06.536 "method": "nvmf_delete_target", 00:09:06.536 "req_id": 1 00:09:06.536 } 00:09:06.536 Got JSON-RPC error response 00:09:06.536 response: 00:09:06.536 { 00:09:06.536 "code": -32602, 00:09:06.536 "message": "The specified target doesn't exist, cannot delete it." 
00:09:06.536 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.536 18:23:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.536 rmmod nvme_tcp 00:09:06.536 rmmod nvme_fabrics 00:09:06.536 rmmod nvme_keyring 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3796810 ']' 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3796810 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3796810 ']' 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3796810 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796810 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796810' 00:09:06.536 killing process with pid 3796810 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3796810 00:09:06.536 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3796810 00:09:06.794 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.795 18:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.371 18:23:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.371 00:09:09.371 real 0m12.082s 00:09:09.371 user 0m19.342s 00:09:09.371 sys 0m5.320s 00:09:09.371 18:23:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.371 18:23:54 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:09.371 ************************************ 00:09:09.371 END TEST nvmf_invalid 00:09:09.371 ************************************ 00:09:09.371 18:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.371 18:23:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:09.371 18:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.371 18:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.371 18:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.371 ************************************ 00:09:09.371 START TEST nvmf_abort 00:09:09.371 ************************************ 00:09:09.371 18:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:09.371 * Looking for test storage... 00:09:09.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.372 18:23:54 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.372 18:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.659 
18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.659 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:14.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:14.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:14.660 Found net devices under 0000:86:00.0: cvl_0_0 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:14.660 Found net devices under 0000:86:00.1: cvl_0_1 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.660 18:23:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:09:14.660 00:09:14.660 --- 10.0.0.2 ping statistics --- 00:09:14.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.660 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:14.660 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:14.919 00:09:14.919 --- 10.0.0.1 ping statistics --- 00:09:14.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.919 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3801138 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3801138 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3801138 ']' 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.919 18:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:14.919 [2024-07-15 18:24:00.310207] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
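For reference, the interface plumbing that nvmf_tcp_init performed above reduces to the following sketch. The port names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addressing are the ones used in this run; on another machine any two ports that can reach each other would do, so treat the names as placeholders:

  # Move one E810 port into a private namespace for the target and keep the
  # other in the root namespace for the initiator (what nvmf_tcp_init did).
  ip netns add cvl_0_0_ns_spdk                 # namespace that hosts the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  ping -c 1 10.0.0.2                           # root ns -> namespaced port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings are the sanity check that both directions work before the target application is started.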
00:09:14.919 [2024-07-15 18:24:00.310252] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.919 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.919 [2024-07-15 18:24:00.377743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.919 [2024-07-15 18:24:00.457442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.919 [2024-07-15 18:24:00.457478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.919 [2024-07-15 18:24:00.457484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.919 [2024-07-15 18:24:00.457490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.919 [2024-07-15 18:24:00.457495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.919 [2024-07-15 18:24:00.457608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.919 [2024-07-15 18:24:00.457724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.919 [2024-07-15 18:24:00.457725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 [2024-07-15 18:24:01.145424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 Malloc0 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 Delay0 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
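The rpc_cmd calls in this stretch build the target-side object graph: a TCP transport, a 64 MiB malloc bdev with a 4096-byte block size, and a delay bdev stacked on top of it. The -r/-t/-w/-n arguments to bdev_delay_create are in microseconds, so Delay0 adds roughly a full second of artificial latency to every operation, presumably so that the abort example below always finds commands still queued when it submits aborts. rpc_cmd is a thin wrapper around scripts/rpc.py, so a hand-run equivalent would look roughly like this (socket defaults and script location assumed from this workspace layout):

  # Target bring-up, equivalent to the rpc_cmd trace above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB, 4 KiB blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s latency per op
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0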
00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 [2024-07-15 18:24:01.213929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.855 18:24:01 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:15.855 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.855 [2024-07-15 18:24:01.315270] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:18.390 Initializing NVMe Controllers 00:09:18.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:18.390 controller IO queue size 128 less than required 00:09:18.390 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:18.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:18.390 Initialization complete. Launching workers. 
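The workload itself is SPDK's abort example, pointed at the listener just created. The "controller IO queue size 128 less than required" notice is expected with -q 128: the requested queue depth is not smaller than what the controller advertises, so some requests sit queued in the driver, which is exactly the situation the test wants so that aborts have in-flight commands to chase. A standalone invocation mirrors the trace (the flag glosses below are the usual meanings for SPDK example tools and should be double-checked against --help):

  build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 \         # core mask: run on core 0 only
      -t 1 \           # run time in seconds
      -l warning \     # log level
      -q 128           # queue depth

In the statistics that follow, the large "failed" I/O count is the expected outcome rather than a problem: those are the reads the submitted aborts cancelled, and the summary tallies the aborts themselves separately (submitted / success / unsuccess).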
00:09:18.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 46802 00:09:18.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 46867, failed to submit 62 00:09:18.390 success 46806, unsuccess 61, failed 0 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:18.390 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.391 rmmod nvme_tcp 00:09:18.391 rmmod nvme_fabrics 00:09:18.391 rmmod nvme_keyring 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3801138 ']' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3801138 ']' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801138' 00:09:18.391 killing process with pid 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3801138 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.391 18:24:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.296 18:24:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.296 00:09:20.296 real 0m11.378s 00:09:20.296 user 0m13.180s 00:09:20.296 sys 0m4.969s 00:09:20.296 18:24:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.296 18:24:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.296 ************************************ 00:09:20.296 END TEST nvmf_abort 00:09:20.296 ************************************ 00:09:20.296 18:24:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.296 18:24:05 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:20.296 18:24:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.296 18:24:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.296 18:24:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.296 ************************************ 00:09:20.296 START TEST nvmf_ns_hotplug_stress 00:09:20.296 ************************************ 00:09:20.296 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:20.556 * Looking for test storage... 00:09:20.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.556 18:24:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.556 18:24:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.556 18:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.130 18:24:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.130 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:27.130 Found net devices under 0000:86:00.1: cvl_0_1 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.130 18:24:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.130 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:27.131 00:09:27.131 --- 10.0.0.2 ping statistics --- 00:09:27.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.131 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:27.131 00:09:27.131 --- 10.0.0.1 ping statistics --- 00:09:27.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.131 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3805697 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3805697 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3805697 ']' 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.131 18:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.131 [2024-07-15 18:24:11.784094] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
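Note that nvmfappstart launches the target inside the namespace created earlier (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why the app owns 10.0.0.2 and the reactors come up on cores 1-3 (-m 0xE). waitforlisten then blocks until the new process answers on its RPC socket; a rough by-hand equivalent looks like the sketch below (the polling loop is an assumption about what the helper does, not its exact code):

  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # -m 0xE = cores 1,2,3
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept commands.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done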
00:09:27.131 [2024-07-15 18:24:11.784141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.131 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.131 [2024-07-15 18:24:11.852799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.131 [2024-07-15 18:24:11.928756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.131 [2024-07-15 18:24:11.928792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.131 [2024-07-15 18:24:11.928798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.131 [2024-07-15 18:24:11.928804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.131 [2024-07-15 18:24:11.928808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.131 [2024-07-15 18:24:11.928936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.131 [2024-07-15 18:24:11.929052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.131 [2024-07-15 18:24:11.929054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:27.131 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.388 [2024-07-15 18:24:12.765434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.388 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.646 18:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.646 [2024-07-15 18:24:13.146779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.646 18:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.903 18:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:28.161 Malloc0 00:09:28.161 18:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.161 Delay0 00:09:28.418 18:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.418 18:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:28.675 NULL1 00:09:28.675 18:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:28.932 18:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3806033 00:09:28.932 18:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:28.932 18:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:28.932 18:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.932 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.865 Read completed with error (sct=0, sc=11) 00:09:29.865 18:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.123 18:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:30.123 18:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:30.380 true 00:09:30.380 18:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:30.380 18:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.308 18:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.308 18:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:31.308 18:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:31.565 true 00:09:31.565 
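From here to the end of the trace the test is in its steady state: while the 30-second spdk_nvme_perf randread run (PERF_PID=3806033, started with -t 30 -q 128 above) stays alive, the script keeps detaching namespace 1, re-attaching Delay0, and growing NULL1 by 1 MB per pass (the sizes are in megabytes, matching the 1000 passed to bdev_null_create NULL1 1000 512). The initiator therefore sees a continuous stream of namespace detach/attach and resize events under load; the suppressed "Read completed with error (sct=0, sc=11)" messages are presumably perf's reads landing on a namespace that has just been yanked. Condensed, the loop traced at ns_hotplug_stress.sh@44-50 amounts to:

  # Hotplug stress loop, condensed from the xtrace that follows.
  null_size=1000
  while kill -0 "$PERF_PID"; do    # loop until the perf workload exits
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # 1001, 1002, ...
  done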
18:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:31.565 18:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.823 18:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.823 18:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:31.823 18:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:32.080 true 00:09:32.080 18:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:32.080 18:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 18:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.452 18:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:33.452 18:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:33.452 true 00:09:33.710 18:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:33.710 18:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.641 18:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.641 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:34.641 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:34.898 true 00:09:34.898 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:34.898 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.898 18:24:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.155 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:35.155 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:35.412 true 00:09:35.412 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:35.413 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.413 18:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.670 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:35.670 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:35.927 true 00:09:35.927 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:35.927 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.927 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.184 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:36.184 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:36.442 true 00:09:36.442 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:36.442 18:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.813 18:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.813 18:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:37.813 18:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:37.813 true 00:09:38.070 18:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 3806033 00:09:38.070 18:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.001 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.001 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:39.001 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:39.001 true 00:09:39.259 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:39.259 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.259 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.515 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:39.515 18:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:39.515 true 00:09:39.805 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:39.805 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.805 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.103 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:40.103 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:40.103 true 00:09:40.360 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:40.360 18:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.182 18:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.182 18:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:41.182 18:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:41.437 true 00:09:41.437 18:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:41.437 18:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.693 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.693 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:41.693 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:41.950 true 00:09:41.950 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:41.950 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.206 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.463 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:42.463 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:42.463 true 00:09:42.463 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:42.463 18:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.720 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.976 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:42.976 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:42.976 true 00:09:42.976 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:42.976 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.233 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.491 18:24:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:43.491 18:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:43.491 true 00:09:43.491 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:43.491 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.749 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.007 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:44.007 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:44.007 true 00:09:44.265 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:44.265 18:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 18:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.637 18:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:45.637 18:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:45.637 true 00:09:45.637 18:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:45.637 18:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.565 18:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.822 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:46.822 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:46.822 true 00:09:46.822 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:46.822 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.080 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.337 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:47.337 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:47.337 true 00:09:47.595 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:47.595 18:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.537 18:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.796 18:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:48.796 18:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:49.054 true 00:09:49.054 18:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:49.054 18:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.987 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.987 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:49.987 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:49.987 true 00:09:50.244 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:50.244 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.244 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.500 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:50.500 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:50.757 true 00:09:50.757 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:50.757 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.757 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.012 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:51.012 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:51.268 true 00:09:51.268 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:51.268 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.200 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.200 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:52.200 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:52.458 true 00:09:52.458 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:52.458 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.716 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.716 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:52.716 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:52.973 true 00:09:52.973 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:52.973 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.230 18:24:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.230 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:53.230 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:53.488 true 00:09:53.488 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:53.488 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.766 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.766 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:53.766 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:54.025 true 00:09:54.025 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:54.025 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.283 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.540 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:54.540 18:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:54.540 true 00:09:54.540 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:54.540 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.799 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.056 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:55.056 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:55.056 true 00:09:55.056 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033 00:09:55.056 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.430 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:56.430 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:09:56.430 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:09:56.688 true
00:09:56.688 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033
00:09:56.688 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:57.622 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:57.622 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:09:57.622 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:09:57.879 true
00:09:57.879 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033
00:09:57.879 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:58.138 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:58.397 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:09:58.397 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:09:58.397 true
00:09:58.397 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033
00:09:58.397 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:59.772 Initializing NVMe Controllers
00:09:59.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:59.772 Controller IO queue size 128, less than required.
00:09:59.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:59.772 Controller IO queue size 128, less than required.
00:09:59.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:59.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:59.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:59.772 Initialization complete. Launching workers.
00:09:59.772 ========================================================
00:09:59.772                                                                                                       Latency(us)
00:09:59.772 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:59.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1757.17       0.86   41818.75    1378.18 1012335.52
00:09:59.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15029.03       7.34    8516.73    1586.36  369132.92
00:09:59.772 ========================================================
00:09:59.772 Total                                                                    :   16786.20       8.20   12002.76    1378.18 1012335.52
00:09:59.772
00:09:59.772 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:59.772 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:09:59.772 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:09:59.772 true
00:09:59.772 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3806033
00:09:59.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3806033) - No such process
00:09:59.772 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3806033
00:09:59.772 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:00.031 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:00.289 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:00.289 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:00.289 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:00.289 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:00.289 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:00.289 null0
00:10:00.548 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:00.548 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:00.548 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:00.548 null1
00:10:00.548 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:00.548 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:00.548 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdev_null_create null2 100 4096 00:10:00.806 null2 00:10:00.806 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.806 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.806 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:01.065 null3 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:01.065 null4 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.065 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:01.323 null5 00:10:01.323 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.323 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.323 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:01.582 null6 00:10:01.582 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.582 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.582 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:01.582 null7 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
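
The @44-@50 trace lines that dominate this excerpt up to the "No such process" message above are a single hot-resize loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O generator (PID 3806033) stays alive, the script detaches namespace 1, re-attaches the Delay0 bdev, and bumps the size of the NULL1 bdev (null_size 1006, 1007, ... 1035 above). A minimal sketch reconstructed from the traced commands; $rpc_py and $perf_pid are assumed names, since the xtrace only shows their expanded values:

    # Sketch of ns_hotplug_stress.sh@44-@50, reconstructed from the xtrace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1005   # arbitrary start for the sketch; the real value carries over from earlier in the log
    while kill -0 "$perf_pid"; do                                        # @44: loop while the perf process is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                     # @49: 1006, 1007, ... in the trace
        $rpc_py bdev_null_resize NULL1 $null_size                        # @50: grow NULL1 one step per pass
    done

The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the intended side effect of that loop: sct=0/sc=0x0b is the NVMe generic status "Invalid Namespace or Format", returned to reads that race with a namespace that has just been hot-removed.
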
00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:01.582 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
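
The @58-@64 lines above, now interleaving with the first @14-@17 worker lines, are the second phase of the test: eight null bdevs (null0-null7) are created and eight add_remove workers are launched in the background, one namespace ID each. A sketch reconstructed from the traced commands; the for-loop shape is inferred from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) triples, the backgrounding from the pids+=($!) bookkeeping, and $rpc_py is the assumed name from the sketch above:

    add_remove() {                                          # worker traced at @14-@18
        local nsid=$1 bdev=$2                               # @14
        for ((i = 0; i < 10; i++)); do                      # @16: ten add/remove cycles
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18, traced below
        done
    }
    nthreads=8; pids=()                                     # @58
    for ((i = 0; i < nthreads; i++)); do                    # @59
        $rpc_py bdev_null_create "null$i" 100 4096          # @60: name, size in MB, block size in bytes
    done
    for ((i = 0; i < nthreads; i++)); do                    # @62
        add_remove $((i + 1)) "null$i" &                    # @63: one background worker per namespace
        pids+=($!)                                          # @64: collect worker PIDs
    done
    wait "${pids[@]}"                                       # @66: the "wait 3811610 ..." entry below
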
00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
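
Because all eight workers run under the same xtrace, their @14-@18 lines interleave freely with the driver's @62-@64 lines here and below. The "wait 3811610 3811611 3811612 3811614 3811616 3811619 3811621 3811623" entry that follows is the @66 barrier on exactly those worker PIDs, and the out-of-order bursts of nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns calls for namespaces 1-8 through the rest of the excerpt are simply the workers making progress at their own pace.
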
00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3811610 3811611 3811612 3811614 3811616 3811619 3811621 3811623 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.583 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.847 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.847 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.847 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.847 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.848 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.848 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.848 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.848 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.172 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.431 
18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.431 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.690 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.948 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.949 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.207 18:24:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.207 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.466 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.466 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.726 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.985 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.985 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.985 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.985 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.985 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.986 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.245 
18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.245 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.503 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.504 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.504 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.504 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.761 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.019 
18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.019 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.277 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.536 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.537 rmmod nvme_tcp 00:10:05.537 rmmod nvme_fabrics 00:10:05.537 rmmod nvme_keyring 00:10:05.537 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3805697 ']' 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3805697 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3805697 ']' 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3805697 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805697 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805697' 00:10:05.537 killing process with pid 3805697 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3805697 00:10:05.537 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3805697 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
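
The interleaved @16-@18 traces above come from the ns_hotplug_stress loop: ten passes, each hot-adding namespaces 1-8 (backed by null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and then removing them again. A minimal bash sketch of that loop, reconstructed from the traced line tags; the shuffled completion order in the log suggests the RPCs are dispatched in the background, so the &/wait here is an assumption, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 10; ++i )); do                                                    # @16
        for n in {1..8}; do
            $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &   # @17
        done
        wait
        for n in {1..8}; do
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &             # @18
        done
        wait
    done
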
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.796 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.332 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.332 00:10:08.332 real 0m47.484s 00:10:08.332 user 3m12.947s 00:10:08.332 sys 0m14.773s 00:10:08.332 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.332 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.332 ************************************ 00:10:08.332 END TEST nvmf_ns_hotplug_stress 00:10:08.332 ************************************ 00:10:08.332 18:24:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:08.332 18:24:53 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:08.332 18:24:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.332 18:24:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.332 18:24:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.332 ************************************ 00:10:08.332 START TEST nvmf_connect_stress 00:10:08.332 ************************************ 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:08.332 * Looking for test storage... 
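
The START TEST/END TEST banners and the real/user/sys totals just above are printed by the harness's run_test wrapper around each test script. Roughly, and hedged, since the real helper in common/autotest_common.sh also handles xtrace and exit-code bookkeeping beyond this sketch:

    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"              # emits the real/user/sys summary seen above
        echo "END TEST $test_name"
    }

    run_test nvmf_connect_stress \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
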
00:10:08.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
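
The very long PATH values in the @2-@6 traces above are built by paths/export.sh, which prepends the golangci, Go, and protoc toolchain directories to whatever PATH it inherits; the prefixes repeat because the file is apparently sourced once per nested test run. A sketch of the pattern implied by the traces (assumed, not the verbatim script):

    # paths/export.sh (sketch)
    PATH=/opt/golangci/1.54.2/bin:$PATH   # @2
    PATH=/opt/go/1.21.1/bin:$PATH         # @3
    PATH=/opt/protoc/21.7/bin:$PATH       # @4
    export PATH                           # @5
    echo $PATH                            # @6
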
/dev/null' 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.332 18:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.605 18:24:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.605 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.605 18:24:59 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.605 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.605 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
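
What the nvmf_tcp_init trace above amounts to: one E810 port (cvl_0_0) is moved into a private network namespace for the target, so initiator traffic from cvl_0_1 (10.0.0.1) to the target's cvl_0_0 (10.0.0.2) crosses the physical link rather than loopback. Condensed from the @248-@267 trace lines:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                    # reachability check before the test
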
00:10:13.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:10:13.864 00:10:13.864 --- 10.0.0.2 ping statistics --- 00:10:13.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.864 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:10:13.864 00:10:13.864 --- 10.0.0.1 ping statistics --- 00:10:13.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.864 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3815983 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3815983 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3815983 ']' 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.864 18:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.864 [2024-07-15 18:24:59.359724] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
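
With the namespace reachable in both directions, nvmfappstart launches the target inside it: core mask 0xE (three reactors, matching the "Reactor started on core 1/2/3" notices below) and the full 0xFFFF tracepoint mask, then waits for the RPC socket. A sketch of the @480-@482 steps; backgrounding with & is an assumption about how the harness keeps the PID:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                   # 3815983 in this run
    waitforlisten "$nvmfpid"     # blocks until /var/tmp/spdk.sock accepts RPCs
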
00:10:13.864 [2024-07-15 18:24:59.359765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.864 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.122 [2024-07-15 18:24:59.430980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.122 [2024-07-15 18:24:59.502587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.122 [2024-07-15 18:24:59.502627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.122 [2024-07-15 18:24:59.502634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.122 [2024-07-15 18:24:59.502640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.122 [2024-07-15 18:24:59.502644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.122 [2024-07-15 18:24:59.502758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.122 [2024-07-15 18:24:59.502884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.122 [2024-07-15 18:24:59.502886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 [2024-07-15 18:25:00.213334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.686 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.686 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.686 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.943 [2024-07-15 18:25:00.244455] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.943 NULL1 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3816115 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:14.943 18:25:00 
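
The connect_stress setup traced above, gathered in one place: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev; the connect_stress tool is then started in the background against that listener for 10 seconds. The heredoc bodies behind the twenty @28 cat calls are not visible in this log, so rpc.txt's contents are left out here:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                              # @15
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10      # @16
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # @17
    rpc_cmd bdev_null_create NULL1 1000 512                                                      # @18
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                  # 3816115 in this run
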
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat [... the for i in $(seq 1 20) / cat trace pair repeats identically for the remaining loop iterations; duplicates omitted ...] 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816115 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.943 18:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x [... the liveness loop ([[ 0 == 0 ]] / kill -0 3816115 / rpc_cmd / xtrace_disable / set +x) repeats with only timestamps advancing from 18:25:00 through 18:25:10; identical iterations omitted ...] 00:10:25.072 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816115 00:10:25.072
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3816115) - No such process 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3816115 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.072 rmmod nvme_tcp 00:10:25.072 rmmod nvme_fabrics 00:10:25.072 rmmod nvme_keyring 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3815983 ']' 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3815983 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3815983 ']' 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3815983 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3815983 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3815983' 00:10:25.072 killing process with pid 3815983 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3815983 00:10:25.072 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3815983 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.331 18:25:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.236 18:25:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.236 00:10:27.236 real 0m19.423s 00:10:27.236 user 0m43.245s 00:10:27.236 sys 0m6.452s 00:10:27.498 18:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.498 18:25:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.498 ************************************ 00:10:27.498 END TEST nvmf_connect_stress 00:10:27.498 ************************************ 00:10:27.498 18:25:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:27.498 18:25:12 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:27.498 18:25:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.498 18:25:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.498 18:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.498 ************************************ 00:10:27.498 START TEST nvmf_fused_ordering 00:10:27.498 ************************************ 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:27.498 * Looking for test storage... 00:10:27.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=[... long toolchain prefix (golangci 1.54.2, protoc 21.7, go 1.21.1, repeated) plus the standard system dirs; the same dump recurs verbatim at paths/export.sh@3, @4, and the @6 echo, and is omitted here ...] 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.498 18:25:12
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.498 18:25:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.499 18:25:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.499 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.499 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.499 18:25:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.499 18:25:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.107 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:34.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:34.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:34.108 Found net devices under 0000:86:00.0: cvl_0_0 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:34.108 Found net devices under 0000:86:00.1: cvl_0_1 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:10:34.108 00:10:34.108 --- 10.0.0.2 ping statistics --- 00:10:34.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.108 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:10:34.108 00:10:34.108 --- 10.0.0.1 ping statistics --- 00:10:34.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.108 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3821375 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3821375 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3821375 ']' 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.108 18:25:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.108 [2024-07-15 18:25:18.857394] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:34.108 [2024-07-15 18:25:18.857437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.108 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.108 [2024-07-15 18:25:18.911787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.108 [2024-07-15 18:25:18.987314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.108 [2024-07-15 18:25:18.987356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.108 [2024-07-15 18:25:18.987364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.108 [2024-07-15 18:25:18.987370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.108 [2024-07-15 18:25:18.987375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
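For readers reproducing this step outside the harness: the nvmfappstart/waitforlisten sequence traced above amounts to launching nvmf_tgt inside the target network namespace and polling its RPC socket until it answers. A minimal sketch, assuming the stock scripts/rpc.py client, repo-root relative paths, and the default /var/tmp/spdk.sock socket named in the trace (the polling loop is a rough stand-in for the harness's waitforlisten, not its exact logic):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # Poll the RPC unix socket until the target responds; unix sockets are
  # reachable from the host even though the process runs in the netns.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done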
00:10:34.108 [2024-07-15 18:25:18.987410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.366 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 [2024-07-15 18:25:19.720362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 [2024-07-15 18:25:19.740523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 NULL1 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.367 18:25:19 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.367 18:25:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:34.367 [2024-07-15 18:25:19.795406] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:34.367 [2024-07-15 18:25:19.795440] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821433 ] 00:10:34.367 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.628 Attached to nqn.2016-06.io.spdk:cnode1 00:10:34.628 Namespace ID: 1 size: 1GB 00:10:34.628 fused_ordering(0) 00:10:34.628 fused_ordering(1) 00:10:34.628 fused_ordering(2) 00:10:34.628 fused_ordering(3) 00:10:34.628 fused_ordering(4) 00:10:34.628 fused_ordering(5) 00:10:34.628 fused_ordering(6) 00:10:34.628 fused_ordering(7) 00:10:34.628 fused_ordering(8) 00:10:34.628 fused_ordering(9) 00:10:34.628 fused_ordering(10) 00:10:34.628 fused_ordering(11) 00:10:34.628 fused_ordering(12) 00:10:34.628 fused_ordering(13) 00:10:34.628 fused_ordering(14) 00:10:34.628 fused_ordering(15) 00:10:34.628 fused_ordering(16) 00:10:34.628 fused_ordering(17) 00:10:34.628 fused_ordering(18) 00:10:34.628 fused_ordering(19) 00:10:34.628 fused_ordering(20) 00:10:34.628 fused_ordering(21) 00:10:34.628 fused_ordering(22) 00:10:34.628 fused_ordering(23) 00:10:34.628 fused_ordering(24) 00:10:34.628 fused_ordering(25) 00:10:34.628 fused_ordering(26) 00:10:34.628 fused_ordering(27) 00:10:34.628 fused_ordering(28) 00:10:34.628 fused_ordering(29) 00:10:34.628 fused_ordering(30) 00:10:34.628 fused_ordering(31) 00:10:34.628 fused_ordering(32) 00:10:34.628 fused_ordering(33) 00:10:34.628 fused_ordering(34) 00:10:34.628 fused_ordering(35) 00:10:34.628 fused_ordering(36) 00:10:34.628 fused_ordering(37) 00:10:34.628 fused_ordering(38) 00:10:34.628 fused_ordering(39) 00:10:34.628 fused_ordering(40) 00:10:34.628 fused_ordering(41) 00:10:34.628 fused_ordering(42) 00:10:34.628 fused_ordering(43) 00:10:34.628 fused_ordering(44) 00:10:34.628 fused_ordering(45) 00:10:34.628 fused_ordering(46) 00:10:34.628 fused_ordering(47) 00:10:34.628 fused_ordering(48) 00:10:34.628 fused_ordering(49) 00:10:34.628 fused_ordering(50) 00:10:34.628 fused_ordering(51) 00:10:34.628 fused_ordering(52) 00:10:34.628 fused_ordering(53) 00:10:34.628 fused_ordering(54) 00:10:34.628 fused_ordering(55) 00:10:34.628 fused_ordering(56) 00:10:34.628 fused_ordering(57) 00:10:34.628 fused_ordering(58) 00:10:34.628 fused_ordering(59) 00:10:34.628 fused_ordering(60) 00:10:34.628 fused_ordering(61) 00:10:34.628 fused_ordering(62) 00:10:34.628 fused_ordering(63) 00:10:34.628 fused_ordering(64) 00:10:34.628 fused_ordering(65) 00:10:34.628 fused_ordering(66) 00:10:34.628 fused_ordering(67) 00:10:34.628 fused_ordering(68) 00:10:34.628 fused_ordering(69) 00:10:34.628 fused_ordering(70) 00:10:34.628 fused_ordering(71) 00:10:34.628 fused_ordering(72) 00:10:34.628 fused_ordering(73) 00:10:34.628 fused_ordering(74) 00:10:34.628 fused_ordering(75) 00:10:34.628 fused_ordering(76) 00:10:34.628 fused_ordering(77) 00:10:34.628 fused_ordering(78) 00:10:34.628 
fused_ordering(79) 00:10:34.628 fused_ordering(80) 00:10:34.628 [... fused_ordering(81) through fused_ordering(508) continue in ascending order, with timestamps advancing from 00:10:34.628 to 00:10:35.143; the repetitive enumeration is omitted ...] 00:10:35.143 fused_ordering(509)
00:10:35.143 fused_ordering(510) 00:10:35.143 fused_ordering(511) 00:10:35.143 fused_ordering(512) 00:10:35.143 fused_ordering(513) 00:10:35.143 fused_ordering(514) 00:10:35.143 fused_ordering(515) 00:10:35.143 fused_ordering(516) 00:10:35.143 fused_ordering(517) 00:10:35.143 fused_ordering(518) 00:10:35.143 fused_ordering(519) 00:10:35.143 fused_ordering(520) 00:10:35.143 fused_ordering(521) 00:10:35.143 fused_ordering(522) 00:10:35.143 fused_ordering(523) 00:10:35.143 fused_ordering(524) 00:10:35.143 fused_ordering(525) 00:10:35.143 fused_ordering(526) 00:10:35.143 fused_ordering(527) 00:10:35.143 fused_ordering(528) 00:10:35.143 fused_ordering(529) 00:10:35.143 fused_ordering(530) 00:10:35.143 fused_ordering(531) 00:10:35.143 fused_ordering(532) 00:10:35.143 fused_ordering(533) 00:10:35.143 fused_ordering(534) 00:10:35.143 fused_ordering(535) 00:10:35.143 fused_ordering(536) 00:10:35.143 fused_ordering(537) 00:10:35.143 fused_ordering(538) 00:10:35.143 fused_ordering(539) 00:10:35.143 fused_ordering(540) 00:10:35.143 fused_ordering(541) 00:10:35.143 fused_ordering(542) 00:10:35.143 fused_ordering(543) 00:10:35.143 fused_ordering(544) 00:10:35.143 fused_ordering(545) 00:10:35.143 fused_ordering(546) 00:10:35.143 fused_ordering(547) 00:10:35.143 fused_ordering(548) 00:10:35.143 fused_ordering(549) 00:10:35.143 fused_ordering(550) 00:10:35.143 fused_ordering(551) 00:10:35.143 fused_ordering(552) 00:10:35.143 fused_ordering(553) 00:10:35.143 fused_ordering(554) 00:10:35.143 fused_ordering(555) 00:10:35.143 fused_ordering(556) 00:10:35.143 fused_ordering(557) 00:10:35.143 fused_ordering(558) 00:10:35.143 fused_ordering(559) 00:10:35.143 fused_ordering(560) 00:10:35.143 fused_ordering(561) 00:10:35.143 fused_ordering(562) 00:10:35.143 fused_ordering(563) 00:10:35.143 fused_ordering(564) 00:10:35.143 fused_ordering(565) 00:10:35.143 fused_ordering(566) 00:10:35.143 fused_ordering(567) 00:10:35.143 fused_ordering(568) 00:10:35.143 fused_ordering(569) 00:10:35.143 fused_ordering(570) 00:10:35.143 fused_ordering(571) 00:10:35.143 fused_ordering(572) 00:10:35.143 fused_ordering(573) 00:10:35.143 fused_ordering(574) 00:10:35.143 fused_ordering(575) 00:10:35.143 fused_ordering(576) 00:10:35.143 fused_ordering(577) 00:10:35.143 fused_ordering(578) 00:10:35.143 fused_ordering(579) 00:10:35.143 fused_ordering(580) 00:10:35.143 fused_ordering(581) 00:10:35.143 fused_ordering(582) 00:10:35.143 fused_ordering(583) 00:10:35.143 fused_ordering(584) 00:10:35.143 fused_ordering(585) 00:10:35.143 fused_ordering(586) 00:10:35.143 fused_ordering(587) 00:10:35.143 fused_ordering(588) 00:10:35.143 fused_ordering(589) 00:10:35.143 fused_ordering(590) 00:10:35.143 fused_ordering(591) 00:10:35.143 fused_ordering(592) 00:10:35.143 fused_ordering(593) 00:10:35.143 fused_ordering(594) 00:10:35.143 fused_ordering(595) 00:10:35.143 fused_ordering(596) 00:10:35.143 fused_ordering(597) 00:10:35.143 fused_ordering(598) 00:10:35.143 fused_ordering(599) 00:10:35.143 fused_ordering(600) 00:10:35.143 fused_ordering(601) 00:10:35.143 fused_ordering(602) 00:10:35.143 fused_ordering(603) 00:10:35.143 fused_ordering(604) 00:10:35.143 fused_ordering(605) 00:10:35.143 fused_ordering(606) 00:10:35.143 fused_ordering(607) 00:10:35.143 fused_ordering(608) 00:10:35.143 fused_ordering(609) 00:10:35.143 fused_ordering(610) 00:10:35.143 fused_ordering(611) 00:10:35.143 fused_ordering(612) 00:10:35.143 fused_ordering(613) 00:10:35.143 fused_ordering(614) 00:10:35.143 fused_ordering(615) 00:10:35.402 fused_ordering(616) 00:10:35.402 
fused_ordering(617) 00:10:35.402 fused_ordering(618) 00:10:35.402 fused_ordering(619) 00:10:35.402 fused_ordering(620) 00:10:35.402 fused_ordering(621) 00:10:35.402 fused_ordering(622) 00:10:35.402 fused_ordering(623) 00:10:35.402 fused_ordering(624) 00:10:35.402 fused_ordering(625) 00:10:35.402 fused_ordering(626) 00:10:35.402 fused_ordering(627) 00:10:35.402 fused_ordering(628) 00:10:35.402 fused_ordering(629) 00:10:35.402 fused_ordering(630) 00:10:35.402 fused_ordering(631) 00:10:35.402 fused_ordering(632) 00:10:35.402 fused_ordering(633) 00:10:35.402 fused_ordering(634) 00:10:35.402 fused_ordering(635) 00:10:35.402 fused_ordering(636) 00:10:35.402 fused_ordering(637) 00:10:35.402 fused_ordering(638) 00:10:35.402 fused_ordering(639) 00:10:35.402 fused_ordering(640) 00:10:35.402 fused_ordering(641) 00:10:35.402 fused_ordering(642) 00:10:35.402 fused_ordering(643) 00:10:35.402 fused_ordering(644) 00:10:35.402 fused_ordering(645) 00:10:35.402 fused_ordering(646) 00:10:35.402 fused_ordering(647) 00:10:35.402 fused_ordering(648) 00:10:35.402 fused_ordering(649) 00:10:35.402 fused_ordering(650) 00:10:35.402 fused_ordering(651) 00:10:35.402 fused_ordering(652) 00:10:35.402 fused_ordering(653) 00:10:35.402 fused_ordering(654) 00:10:35.402 fused_ordering(655) 00:10:35.402 fused_ordering(656) 00:10:35.402 fused_ordering(657) 00:10:35.402 fused_ordering(658) 00:10:35.402 fused_ordering(659) 00:10:35.402 fused_ordering(660) 00:10:35.402 fused_ordering(661) 00:10:35.402 fused_ordering(662) 00:10:35.402 fused_ordering(663) 00:10:35.402 fused_ordering(664) 00:10:35.402 fused_ordering(665) 00:10:35.402 fused_ordering(666) 00:10:35.402 fused_ordering(667) 00:10:35.402 fused_ordering(668) 00:10:35.402 fused_ordering(669) 00:10:35.402 fused_ordering(670) 00:10:35.402 fused_ordering(671) 00:10:35.402 fused_ordering(672) 00:10:35.402 fused_ordering(673) 00:10:35.402 fused_ordering(674) 00:10:35.402 fused_ordering(675) 00:10:35.402 fused_ordering(676) 00:10:35.402 fused_ordering(677) 00:10:35.402 fused_ordering(678) 00:10:35.402 fused_ordering(679) 00:10:35.402 fused_ordering(680) 00:10:35.402 fused_ordering(681) 00:10:35.402 fused_ordering(682) 00:10:35.402 fused_ordering(683) 00:10:35.402 fused_ordering(684) 00:10:35.402 fused_ordering(685) 00:10:35.402 fused_ordering(686) 00:10:35.402 fused_ordering(687) 00:10:35.402 fused_ordering(688) 00:10:35.402 fused_ordering(689) 00:10:35.402 fused_ordering(690) 00:10:35.402 fused_ordering(691) 00:10:35.402 fused_ordering(692) 00:10:35.402 fused_ordering(693) 00:10:35.402 fused_ordering(694) 00:10:35.402 fused_ordering(695) 00:10:35.402 fused_ordering(696) 00:10:35.402 fused_ordering(697) 00:10:35.402 fused_ordering(698) 00:10:35.402 fused_ordering(699) 00:10:35.402 fused_ordering(700) 00:10:35.402 fused_ordering(701) 00:10:35.402 fused_ordering(702) 00:10:35.402 fused_ordering(703) 00:10:35.402 fused_ordering(704) 00:10:35.402 fused_ordering(705) 00:10:35.402 fused_ordering(706) 00:10:35.402 fused_ordering(707) 00:10:35.402 fused_ordering(708) 00:10:35.402 fused_ordering(709) 00:10:35.402 fused_ordering(710) 00:10:35.402 fused_ordering(711) 00:10:35.402 fused_ordering(712) 00:10:35.402 fused_ordering(713) 00:10:35.402 fused_ordering(714) 00:10:35.402 fused_ordering(715) 00:10:35.402 fused_ordering(716) 00:10:35.402 fused_ordering(717) 00:10:35.402 fused_ordering(718) 00:10:35.402 fused_ordering(719) 00:10:35.402 fused_ordering(720) 00:10:35.402 fused_ordering(721) 00:10:35.402 fused_ordering(722) 00:10:35.402 fused_ordering(723) 00:10:35.402 fused_ordering(724) 
00:10:35.402 fused_ordering(725) 00:10:35.402 fused_ordering(726) 00:10:35.402 fused_ordering(727) 00:10:35.402 fused_ordering(728) 00:10:35.402 fused_ordering(729) 00:10:35.402 fused_ordering(730) 00:10:35.402 fused_ordering(731) 00:10:35.402 fused_ordering(732) 00:10:35.402 fused_ordering(733) 00:10:35.402 fused_ordering(734) 00:10:35.402 fused_ordering(735) 00:10:35.402 fused_ordering(736) 00:10:35.402 fused_ordering(737) 00:10:35.402 fused_ordering(738) 00:10:35.402 fused_ordering(739) 00:10:35.402 fused_ordering(740) 00:10:35.402 fused_ordering(741) 00:10:35.402 fused_ordering(742) 00:10:35.402 fused_ordering(743) 00:10:35.402 fused_ordering(744) 00:10:35.402 fused_ordering(745) 00:10:35.402 fused_ordering(746) 00:10:35.402 fused_ordering(747) 00:10:35.402 fused_ordering(748) 00:10:35.402 fused_ordering(749) 00:10:35.402 fused_ordering(750) 00:10:35.402 fused_ordering(751) 00:10:35.402 fused_ordering(752) 00:10:35.402 fused_ordering(753) 00:10:35.402 fused_ordering(754) 00:10:35.402 fused_ordering(755) 00:10:35.402 fused_ordering(756) 00:10:35.402 fused_ordering(757) 00:10:35.402 fused_ordering(758) 00:10:35.402 fused_ordering(759) 00:10:35.402 fused_ordering(760) 00:10:35.402 fused_ordering(761) 00:10:35.402 fused_ordering(762) 00:10:35.402 fused_ordering(763) 00:10:35.402 fused_ordering(764) 00:10:35.402 fused_ordering(765) 00:10:35.402 fused_ordering(766) 00:10:35.402 fused_ordering(767) 00:10:35.402 fused_ordering(768) 00:10:35.402 fused_ordering(769) 00:10:35.402 fused_ordering(770) 00:10:35.402 fused_ordering(771) 00:10:35.402 fused_ordering(772) 00:10:35.402 fused_ordering(773) 00:10:35.402 fused_ordering(774) 00:10:35.402 fused_ordering(775) 00:10:35.402 fused_ordering(776) 00:10:35.402 fused_ordering(777) 00:10:35.402 fused_ordering(778) 00:10:35.402 fused_ordering(779) 00:10:35.402 fused_ordering(780) 00:10:35.402 fused_ordering(781) 00:10:35.402 fused_ordering(782) 00:10:35.402 fused_ordering(783) 00:10:35.402 fused_ordering(784) 00:10:35.402 fused_ordering(785) 00:10:35.402 fused_ordering(786) 00:10:35.402 fused_ordering(787) 00:10:35.402 fused_ordering(788) 00:10:35.402 fused_ordering(789) 00:10:35.402 fused_ordering(790) 00:10:35.402 fused_ordering(791) 00:10:35.402 fused_ordering(792) 00:10:35.402 fused_ordering(793) 00:10:35.402 fused_ordering(794) 00:10:35.402 fused_ordering(795) 00:10:35.402 fused_ordering(796) 00:10:35.402 fused_ordering(797) 00:10:35.402 fused_ordering(798) 00:10:35.402 fused_ordering(799) 00:10:35.402 fused_ordering(800) 00:10:35.402 fused_ordering(801) 00:10:35.402 fused_ordering(802) 00:10:35.402 fused_ordering(803) 00:10:35.402 fused_ordering(804) 00:10:35.402 fused_ordering(805) 00:10:35.402 fused_ordering(806) 00:10:35.402 fused_ordering(807) 00:10:35.402 fused_ordering(808) 00:10:35.402 fused_ordering(809) 00:10:35.402 fused_ordering(810) 00:10:35.402 fused_ordering(811) 00:10:35.402 fused_ordering(812) 00:10:35.402 fused_ordering(813) 00:10:35.402 fused_ordering(814) 00:10:35.402 fused_ordering(815) 00:10:35.402 fused_ordering(816) 00:10:35.403 fused_ordering(817) 00:10:35.403 fused_ordering(818) 00:10:35.403 fused_ordering(819) 00:10:35.403 fused_ordering(820) 00:10:35.981 fused_o[2024-07-15 18:25:21.301049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f8460 is same with the state(5) to be set 00:10:35.981 rdering(821) 00:10:35.981 fused_ordering(822) 00:10:35.981 fused_ordering(823) 00:10:35.981 fused_ordering(824) 00:10:35.981 fused_ordering(825) 00:10:35.981 fused_ordering(826) 00:10:35.981 
fused_ordering(827) 00:10:35.981 fused_ordering(828) 00:10:35.981 fused_ordering(829) 00:10:35.981 fused_ordering(830) 00:10:35.981 fused_ordering(831) 00:10:35.981 fused_ordering(832) 00:10:35.981 fused_ordering(833) 00:10:35.981 fused_ordering(834) 00:10:35.981 fused_ordering(835) 00:10:35.981 fused_ordering(836) 00:10:35.981 fused_ordering(837) 00:10:35.981 fused_ordering(838) 00:10:35.981 fused_ordering(839) 00:10:35.981 fused_ordering(840) 00:10:35.981 fused_ordering(841) 00:10:35.981 fused_ordering(842) 00:10:35.981 fused_ordering(843) 00:10:35.981 fused_ordering(844) 00:10:35.981 fused_ordering(845) 00:10:35.981 fused_ordering(846) 00:10:35.981 fused_ordering(847) 00:10:35.981 fused_ordering(848) 00:10:35.981 fused_ordering(849) 00:10:35.981 fused_ordering(850) 00:10:35.981 fused_ordering(851) 00:10:35.981 fused_ordering(852) 00:10:35.981 fused_ordering(853) 00:10:35.981 fused_ordering(854) 00:10:35.981 fused_ordering(855) 00:10:35.981 fused_ordering(856) 00:10:35.981 fused_ordering(857) 00:10:35.981 fused_ordering(858) 00:10:35.981 fused_ordering(859) 00:10:35.981 fused_ordering(860) 00:10:35.981 fused_ordering(861) 00:10:35.981 fused_ordering(862) 00:10:35.981 fused_ordering(863) 00:10:35.981 fused_ordering(864) 00:10:35.981 fused_ordering(865) 00:10:35.981 fused_ordering(866) 00:10:35.981 fused_ordering(867) 00:10:35.981 fused_ordering(868) 00:10:35.981 fused_ordering(869) 00:10:35.981 fused_ordering(870) 00:10:35.981 fused_ordering(871) 00:10:35.981 fused_ordering(872) 00:10:35.981 fused_ordering(873) 00:10:35.981 fused_ordering(874) 00:10:35.981 fused_ordering(875) 00:10:35.981 fused_ordering(876) 00:10:35.981 fused_ordering(877) 00:10:35.981 fused_ordering(878) 00:10:35.981 fused_ordering(879) 00:10:35.981 fused_ordering(880) 00:10:35.981 fused_ordering(881) 00:10:35.981 fused_ordering(882) 00:10:35.981 fused_ordering(883) 00:10:35.981 fused_ordering(884) 00:10:35.981 fused_ordering(885) 00:10:35.981 fused_ordering(886) 00:10:35.981 fused_ordering(887) 00:10:35.981 fused_ordering(888) 00:10:35.981 fused_ordering(889) 00:10:35.981 fused_ordering(890) 00:10:35.981 fused_ordering(891) 00:10:35.981 fused_ordering(892) 00:10:35.981 fused_ordering(893) 00:10:35.981 fused_ordering(894) 00:10:35.981 fused_ordering(895) 00:10:35.981 fused_ordering(896) 00:10:35.981 fused_ordering(897) 00:10:35.981 fused_ordering(898) 00:10:35.981 fused_ordering(899) 00:10:35.981 fused_ordering(900) 00:10:35.981 fused_ordering(901) 00:10:35.981 fused_ordering(902) 00:10:35.981 fused_ordering(903) 00:10:35.981 fused_ordering(904) 00:10:35.981 fused_ordering(905) 00:10:35.981 fused_ordering(906) 00:10:35.981 fused_ordering(907) 00:10:35.981 fused_ordering(908) 00:10:35.981 fused_ordering(909) 00:10:35.981 fused_ordering(910) 00:10:35.981 fused_ordering(911) 00:10:35.981 fused_ordering(912) 00:10:35.981 fused_ordering(913) 00:10:35.981 fused_ordering(914) 00:10:35.981 fused_ordering(915) 00:10:35.981 fused_ordering(916) 00:10:35.981 fused_ordering(917) 00:10:35.981 fused_ordering(918) 00:10:35.981 fused_ordering(919) 00:10:35.981 fused_ordering(920) 00:10:35.981 fused_ordering(921) 00:10:35.981 fused_ordering(922) 00:10:35.981 fused_ordering(923) 00:10:35.981 fused_ordering(924) 00:10:35.981 fused_ordering(925) 00:10:35.981 fused_ordering(926) 00:10:35.981 fused_ordering(927) 00:10:35.981 fused_ordering(928) 00:10:35.981 fused_ordering(929) 00:10:35.981 fused_ordering(930) 00:10:35.981 fused_ordering(931) 00:10:35.981 fused_ordering(932) 00:10:35.981 fused_ordering(933) 00:10:35.981 fused_ordering(934) 
00:10:35.981 fused_ordering(935) 00:10:35.981 fused_ordering(936) 00:10:35.981 fused_ordering(937) 00:10:35.981 fused_ordering(938) 00:10:35.981 fused_ordering(939) 00:10:35.981 fused_ordering(940) 00:10:35.981 fused_ordering(941) 00:10:35.981 fused_ordering(942) 00:10:35.981 fused_ordering(943) 00:10:35.981 fused_ordering(944) 00:10:35.981 fused_ordering(945) 00:10:35.981 fused_ordering(946) 00:10:35.981 fused_ordering(947) 00:10:35.981 fused_ordering(948) 00:10:35.981 fused_ordering(949) 00:10:35.981 fused_ordering(950) 00:10:35.981 fused_ordering(951) 00:10:35.981 fused_ordering(952) 00:10:35.981 fused_ordering(953) 00:10:35.981 fused_ordering(954) 00:10:35.981 fused_ordering(955) 00:10:35.981 fused_ordering(956) 00:10:35.981 fused_ordering(957) 00:10:35.981 fused_ordering(958) 00:10:35.981 fused_ordering(959) 00:10:35.981 fused_ordering(960) 00:10:35.981 fused_ordering(961) 00:10:35.981 fused_ordering(962) 00:10:35.981 fused_ordering(963) 00:10:35.981 fused_ordering(964) 00:10:35.982 fused_ordering(965) 00:10:35.982 fused_ordering(966) 00:10:35.982 fused_ordering(967) 00:10:35.982 fused_ordering(968) 00:10:35.982 fused_ordering(969) 00:10:35.982 fused_ordering(970) 00:10:35.982 fused_ordering(971) 00:10:35.982 fused_ordering(972) 00:10:35.982 fused_ordering(973) 00:10:35.982 fused_ordering(974) 00:10:35.982 fused_ordering(975) 00:10:35.982 fused_ordering(976) 00:10:35.982 fused_ordering(977) 00:10:35.982 fused_ordering(978) 00:10:35.982 fused_ordering(979) 00:10:35.982 fused_ordering(980) 00:10:35.982 fused_ordering(981) 00:10:35.982 fused_ordering(982) 00:10:35.982 fused_ordering(983) 00:10:35.982 fused_ordering(984) 00:10:35.982 fused_ordering(985) 00:10:35.982 fused_ordering(986) 00:10:35.982 fused_ordering(987) 00:10:35.982 fused_ordering(988) 00:10:35.982 fused_ordering(989) 00:10:35.982 fused_ordering(990) 00:10:35.982 fused_ordering(991) 00:10:35.982 fused_ordering(992) 00:10:35.982 fused_ordering(993) 00:10:35.982 fused_ordering(994) 00:10:35.982 fused_ordering(995) 00:10:35.982 fused_ordering(996) 00:10:35.982 fused_ordering(997) 00:10:35.982 fused_ordering(998) 00:10:35.982 fused_ordering(999) 00:10:35.982 fused_ordering(1000) 00:10:35.982 fused_ordering(1001) 00:10:35.982 fused_ordering(1002) 00:10:35.982 fused_ordering(1003) 00:10:35.982 fused_ordering(1004) 00:10:35.982 fused_ordering(1005) 00:10:35.982 fused_ordering(1006) 00:10:35.982 fused_ordering(1007) 00:10:35.982 fused_ordering(1008) 00:10:35.982 fused_ordering(1009) 00:10:35.982 fused_ordering(1010) 00:10:35.982 fused_ordering(1011) 00:10:35.982 fused_ordering(1012) 00:10:35.982 fused_ordering(1013) 00:10:35.982 fused_ordering(1014) 00:10:35.982 fused_ordering(1015) 00:10:35.982 fused_ordering(1016) 00:10:35.982 fused_ordering(1017) 00:10:35.982 fused_ordering(1018) 00:10:35.982 fused_ordering(1019) 00:10:35.982 fused_ordering(1020) 00:10:35.982 fused_ordering(1021) 00:10:35.982 fused_ordering(1022) 00:10:35.982 fused_ordering(1023) 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:35.982 18:25:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.982 rmmod nvme_tcp 00:10:35.982 rmmod nvme_fabrics 00:10:35.982 rmmod nvme_keyring 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3821375 ']' 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3821375 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3821375 ']' 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3821375 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821375 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821375' 00:10:35.982 killing process with pid 3821375 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3821375 00:10:35.982 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3821375 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.240 18:25:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.143 18:25:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.143 00:10:38.143 real 0m10.806s 00:10:38.143 user 0m5.461s 00:10:38.143 sys 0m5.347s 00:10:38.143 18:25:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.143 18:25:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.143 ************************************ 00:10:38.143 END TEST nvmf_fused_ordering 00:10:38.143 ************************************ 00:10:38.407 18:25:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.407 18:25:23 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
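For readers following the teardown trace above: the pattern is a trap reset, a best-effort unload of the nvme transport modules (retried, with set +e so a busy module is not fatal), and then a pid-checked kill of the target process. A distilled, illustrative sketch of that flow — the function name and the $nvmfpid variable are stand-ins here, not the exact helpers from nvmf/common.sh:

    # sketch: tear down an nvmf target started earlier in the same shell;
    # assumes $nvmfpid holds the target pid and the target is a child process
    cleanup_nvmf() {
        trap - SIGINT SIGTERM EXIT            # drop the error-handling trap first
        sync                                  # flush outstanding writes
        set +e                                # module removal may legitimately fail
        for i in {1..20}; do                  # the suite retries: modules can be busy
            modprobe -v -r nvme-tcp && break  # pulls in nvme-fabrics/nvme-keyring too
        done
        set -e
        if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
            kill "$nvmfpid"                   # SIGTERM; the suite escalates only if needed
            wait "$nvmfpid" 2>/dev/null       # reap it so the port is free for the next test
        fi
    }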
00:10:38.407 18:25:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.407 18:25:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.407 18:25:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.407 ************************************ 00:10:38.407 START TEST nvmf_delete_subsystem 00:10:38.407 ************************************ 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:38.407 * Looking for test storage... 00:10:38.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.407 18:25:23 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.407 18:25:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:44.974 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:44.974 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.974 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.975 
18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:44.975 Found net devices under 0000:86:00.0: cvl_0_0 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:44.975 Found net devices under 0000:86:00.1: cvl_0_1 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.975 18:25:29 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:44.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:10:44.975 00:10:44.975 --- 10.0.0.2 ping statistics --- 00:10:44.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.975 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:10:44.975 00:10:44.975 --- 10.0.0.1 ping statistics --- 00:10:44.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.975 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3825165 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3825165 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3825165 ']' 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.975 18:25:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.975 18:25:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.975 [2024-07-15 18:25:29.724547] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:44.975 [2024-07-15 18:25:29.724593] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.975 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.975 [2024-07-15 18:25:29.794023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.975 [2024-07-15 18:25:29.872700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.975 [2024-07-15 18:25:29.872734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.975 [2024-07-15 18:25:29.872741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.975 [2024-07-15 18:25:29.872748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.975 [2024-07-15 18:25:29.872752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
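Before the delete_subsystem test proper, nvmf/common.sh builds the same two-port loopback topology used throughout this run: one port of the back-to-back-cabled e810 NIC is moved into a network namespace and acts as the target, while the peer port stays in the root namespace as the initiator. Recapped from the trace above as a stand-alone sketch (interface names and addresses exactly as logged; run as root):

    # sketch: put one NIC port in a namespace as the target, keep the peer as initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Both pings coming back with ~0.1-0.2 ms RTT, as in the statistics above, confirms the loopback path before the target application is started inside the namespace.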
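The provisioning the test script drives through rpc_cmd in the records that follow (transport, then subsystem, listener, backing bdevs, and namespace) can also be issued directly with scripts/rpc.py. A hedged sketch, assuming the default /var/tmp/spdk.sock RPC socket; the delay bdev wraps the null bdev with roughly one second of added latency per operation (the -r/-t/-w/-n flags set average/p99 read/write latencies in microseconds, if memory serves), which is what keeps I/O in flight long enough for the subsequent delete to race with it:

    # sketch: same RPC sequence as the trace below, issued via scripts/rpc.py
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s added latency on every op
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # later, while spdk_nvme_perf is mid-run:
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

spdk_nvme_perf then runs against the 10.0.0.2:4420 listener for 5 seconds at queue depth 128 (-c 0xC -t 5 -q 128 -w randrw -M 70 -o 512 -P 4), and nvmf_delete_subsystem is issued mid-run, which is why the perf job's completions start erroring out further down.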
00:10:44.975 [2024-07-15 18:25:29.872799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.975 [2024-07-15 18:25:29.872801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.975 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 [2024-07-15 18:25:30.572439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 [2024-07-15 18:25:30.592585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 NULL1 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 Delay0 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3825402 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:45.233 18:25:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:45.233 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.233 [2024-07-15 18:25:30.683322] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:47.133 18:25:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.133 18:25:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.133 18:25:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Write completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 Read completed with error (sct=0, sc=8) 00:10:47.391 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 
Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 [2024-07-15 18:25:32.850377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24205c0 is same with the state(5) to be set 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with 
error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 starting I/O failed: -6 00:10:47.392 [2024-07-15 18:25:32.851866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb364000c00 is same with the state(5) to be set 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read 
completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:47.392 Read completed with error (sct=0, sc=8) 00:10:47.392 Write completed with error (sct=0, sc=8) 00:10:48.335 [2024-07-15 18:25:33.820012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421ac0 is same with the state(5) to be set 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error 
(sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 [2024-07-15 18:25:33.854403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb36400cfe0 is same with the state(5) to be set 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 [2024-07-15 18:25:33.854700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb36400d760 is same with the state(5) to be set 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, 
sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 [2024-07-15 18:25:33.854810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24203e0 is same with the state(5) to be set 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Write completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 Read completed with error (sct=0, sc=8) 00:10:48.335 [2024-07-15 18:25:33.855251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24207a0 is same with the state(5) to be set 00:10:48.335 Initializing NVMe Controllers 00:10:48.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:48.335 Controller IO queue size 128, less than required. 00:10:48.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:48.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:48.335 Initialization complete. Launching workers. 
00:10:48.335 ======================================================== 00:10:48.335 Latency(us) 00:10:48.335 Device Information : IOPS MiB/s Average min max 00:10:48.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.07 0.08 897930.15 263.81 1009977.87 00:10:48.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.09 0.08 971299.31 254.41 2002069.31 00:10:48.335 ======================================================== 00:10:48.336 Total : 332.16 0.16 934175.39 254.41 2002069.31 00:10:48.336 00:10:48.336 [2024-07-15 18:25:33.855553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2421ac0 (9): Bad file descriptor 00:10:48.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:48.336 18:25:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.336 18:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:48.336 18:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3825402 00:10:48.336 18:25:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3825402 00:10:48.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3825402) - No such process 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3825402 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3825402 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3825402 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
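The abort storm above is the point of the first pass: spdk_nvme_perf is launched against the subsystem and, two seconds in, nvmf_delete_subsystem is issued underneath it, so every command still in flight completes with an abort status (sct=0, sc=8 sits in the NVMe generic "command aborted" family). A minimal sketch of the RPC sequence driving this, reconstructed from the xtrace lines above (rpc.py stands for SPDK's scripts/rpc.py; this is a sketch, not the script itself):

# Reconstructed from the xtrace output above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &  # Delay0 keeps I/O queued in flight
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # aborts everything queued

The 1000000 values passed to bdev_delay_create are the average/p99 read and write latencies, which is what guarantees a deep backlog of outstanding commands for the delete to abort.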
00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 [2024-07-15 18:25:34.386603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3826094 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:48.903 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:48.903 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.161 [2024-07-15 18:25:34.464465] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
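The wait loop that follows is a plain existence poll on the perf process: kill -0 delivers no signal, it only asks whether the PID is still alive, which is why its eventual failure ("No such process") is the success condition. A sketch of the pattern, with names matching the xtrace (the real script's loop body is equivalent but not identical):

perf_pid=$!        # PID of the backgrounded spdk_nvme_perf
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf did not exit"; exit 1; }
    sleep 0.5      # up to ~10 s for perf to notice the subsystem is gone
done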
00:10:49.419 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.419 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:49.419 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.985 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.985 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:49.985 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:50.551 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.551 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:50.551 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.116 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.116 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:51.116 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.373 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.373 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:51.373 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.938 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.938 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:51.938 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.196 Initializing NVMe Controllers 00:10:52.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.196 Controller IO queue size 128, less than required. 00:10:52.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:52.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:52.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:52.196 Initialization complete. Launching workers. 
00:10:52.196 ======================================================== 00:10:52.196 Latency(us) 00:10:52.196 Device Information : IOPS MiB/s Average min max 00:10:52.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001852.40 1000112.92 1006285.35 00:10:52.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003483.55 1000103.28 1041151.46 00:10:52.196 ======================================================== 00:10:52.196 Total : 256.00 0.12 1002667.97 1000103.28 1041151.46 00:10:52.196 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3826094 00:10:52.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3826094) - No such process 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3826094 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.455 rmmod nvme_tcp 00:10:52.455 rmmod nvme_fabrics 00:10:52.455 rmmod nvme_keyring 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3825165 ']' 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3825165 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3825165 ']' 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3825165 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.455 18:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3825165 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3825165' 00:10:52.714 killing process with pid 3825165 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3825165 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3825165 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.714 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.248 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.248 00:10:55.248 real 0m16.547s 00:10:55.248 user 0m30.554s 00:10:55.248 sys 0m5.224s 00:10:55.248 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.248 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.248 ************************************ 00:10:55.248 END TEST nvmf_delete_subsystem 00:10:55.248 ************************************ 00:10:55.248 18:25:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:55.248 18:25:40 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:55.248 18:25:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.248 18:25:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.248 18:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.248 ************************************ 00:10:55.248 START TEST nvmf_ns_masking 00:10:55.248 ************************************ 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:55.248 * Looking for test storage... 
00:10:55.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1cf801f9-f533-4055-9831-8d02a8dcbca3 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=17f93c23-9ab9-4eb1-9c7b-a9db8086ea02 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a982f449-bd3d-4591-90fe-424e5e0ca831 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.248 18:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:00.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.522 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:00.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.523 
18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:00.523 Found net devices under 0000:86:00.0: cvl_0_0 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:00.523 Found net devices under 0000:86:00.1: cvl_0_1 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.523 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.523 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.523 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.523 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:11:00.782 00:11:00.782 --- 10.0.0.2 ping statistics --- 00:11:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.782 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:00.782 00:11:00.782 --- 10.0.0.1 ping statistics --- 00:11:00.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.782 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3830090 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3830090 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3830090 ']' 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.782 18:25:46 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.782 18:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.782 [2024-07-15 18:25:46.333223] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:00.782 [2024-07-15 18:25:46.333264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.041 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.041 [2024-07-15 18:25:46.401615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.041 [2024-07-15 18:25:46.473824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.041 [2024-07-15 18:25:46.473864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.041 [2024-07-15 18:25:46.473871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.041 [2024-07-15 18:25:46.473876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.041 [2024-07-15 18:25:46.473881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.041 [2024-07-15 18:25:46.473917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.634 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:01.893 [2024-07-15 18:25:47.312583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.893 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:01.893 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:01.893 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:02.152 Malloc1 00:11:02.152 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:02.152 Malloc2 00:11:02.410 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
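With Malloc1 attached as an auto-visible namespace, the connect helper on the host side boils down to the following nvme-cli calls, assembled here from the log for readability (-I is nvme-cli's --hostid, pinning a fixed host UUID for the connection; -i 4 requests four I/O queues):

# Host-side connect used by the test.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 \
    -I a982f449-bd3d-4591-90fe-424e5e0ca831 \
    -i 4
# Resolve which controller device the connection landed on:
nvme list-subsys -o json \
    | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'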
00:11:02.410 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:02.669 18:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.669 [2024-07-15 18:25:48.205511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a982f449-bd3d-4591-90fe-424e5e0ca831 -a 10.0.0.2 -s 4420 -i 4 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:02.928 18:25:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:05.460 [ 0]:0x1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb441ce6f300410681fb485efeca90f5 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb441ce6f300410681fb485efeca90f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
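Re-adding Malloc1 with --no-auto-visible is what the masking assertions below exercise: the NOT ns_is_visible 0x1 check expects the namespace to identify with an all-zero NGUID once it is hidden from this host, while nsid 0x2 (Malloc2) stays visible. A simplified sketch of the visibility probe the script drives (the controller name and error handling are assumptions; the real helper is more defensive):

# Locate the nsid in list-ns, then treat an all-zero NGUID as "masked".
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x2 && echo "nsid 2 visible to host1"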
00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:05.460 [ 0]:0x1 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb441ce6f300410681fb485efeca90f5 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb441ce6f300410681fb485efeca90f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:05.460 [ 1]:0x2 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:05.460 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.719 18:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.977 18:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:06.235 18:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:06.235 18:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a982f449-bd3d-4591-90fe-424e5e0ca831 -a 10.0.0.2 -s 4420 -i 4 00:11:06.235 18:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:06.235 18:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.236 18:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.236 18:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:06.236 18:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:06.236 18:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.139 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.398 18:25:53 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.398 [ 0]:0x2 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.398 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.658 [ 0]:0x1 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb441ce6f300410681fb485efeca90f5 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb441ce6f300410681fb485efeca90f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.658 [ 1]:0x2 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.658 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.917 [ 0]:0x2 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:08.917 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.175 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.175 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:09.175 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a982f449-bd3d-4591-90fe-424e5e0ca831 -a 10.0.0.2 -s 4420 -i 4 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:09.433 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
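[Note] Everything from here on exercises per-host masking: a namespace added with --no-auto-visible stays hidden from every host until it is allowlisted, and can be masked again at runtime without tearing the connection down. The three target-side knobs, exactly as invoked throughout this log (rpc.py path shortened):

  rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # hidden by default
  rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # expose NSID 1 to host1
  rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # mask it again
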
00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.961 [ 0]:0x1 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:11.961 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb441ce6f300410681fb485efeca90f5 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb441ce6f300410681fb485efeca90f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.961 [ 1]:0x2 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:11.961 [ 0]:0x2 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:11.961 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.219 [2024-07-15 18:25:57.552022] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:12.219 request: 00:11:12.219 { 00:11:12.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.219 "nsid": 2, 00:11:12.219 "host": "nqn.2016-06.io.spdk:host1", 00:11:12.219 "method": "nvmf_ns_remove_host", 00:11:12.219 "req_id": 1 00:11:12.219 } 00:11:12.219 Got JSON-RPC error response 00:11:12.219 response: 00:11:12.219 { 00:11:12.219 "code": -32602, 00:11:12.219 "message": "Invalid parameters" 00:11:12.219 } 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.219 [ 0]:0x2 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e29ed91aae4492eaf1d5c298c3ffc01 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9e29ed91aae4492eaf1d5c298c3ffc01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:12.219 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3832137 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3832137 /var/tmp/host.sock 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3832137 ']' 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.478 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.478 [2024-07-15 18:25:57.884833] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
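[Note] A second SPDK process now takes the host role: another spdk_tgt on its own RPC socket, which attaches to the target as an NVMe-oF initiator under two different host NQNs so that each sees only its allowlisted namespace. A sketch of that half, using the commands as they appear below:

  build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &              # host-side SPDK app on its own RPC socket
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  # uuid2nguid below is simply the UUID with dashes stripped (tr -d -), upcased for the -g argument.
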
00:11:12.478 [2024-07-15 18:25:57.884879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832137 ] 00:11:12.478 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.478 [2024-07-15 18:25:57.937585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.478 [2024-07-15 18:25:58.012830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.413 18:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.413 18:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:13.413 18:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.413 18:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.672 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1cf801f9-f533-4055-9831-8d02a8dcbca3 00:11:13.672 18:25:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:13.672 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1CF801F9F533405598318D02A8DCBCA3 -i 00:11:13.672 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 17f93c23-9ab9-4eb1-9c7b-a9db8086ea02 00:11:13.672 18:25:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:13.930 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 17F93C239AB94EB19C7BA9DB8086EA02 -i 00:11:13.930 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.190 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:14.449 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:14.449 18:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:14.707 nvme0n1 00:11:14.707 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:14.707 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:14.967 nvme1n2 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:14.967 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:15.227 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1cf801f9-f533-4055-9831-8d02a8dcbca3 == \1\c\f\8\0\1\f\9\-\f\5\3\3\-\4\0\5\5\-\9\8\3\1\-\8\d\0\2\a\8\d\c\b\c\a\3 ]] 00:11:15.227 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:15.227 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:15.227 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 17f93c23-9ab9-4eb1-9c7b-a9db8086ea02 == \1\7\f\9\3\c\2\3\-\9\a\b\9\-\4\e\b\1\-\9\c\7\b\-\a\9\d\b\8\0\8\6\e\a\0\2 ]] 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3832137 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3832137 ']' 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3832137 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3832137 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3832137' 00:11:15.486 killing process with pid 3832137 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3832137 00:11:15.486 18:26:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3832137 00:11:15.746 18:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:16.005 18:26:01 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.005 rmmod nvme_tcp 00:11:16.005 rmmod nvme_fabrics 00:11:16.005 rmmod nvme_keyring 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3830090 ']' 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3830090 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3830090 ']' 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3830090 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830090 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830090' 00:11:16.005 killing process with pid 3830090 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3830090 00:11:16.005 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3830090 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.264 18:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.799 18:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.799 00:11:18.799 real 0m23.444s 00:11:18.799 user 0m25.034s 00:11:18.799 sys 0m6.446s 00:11:18.799 18:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.799 18:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.799 ************************************ 00:11:18.799 END TEST nvmf_ns_masking 00:11:18.799 ************************************ 00:11:18.799 18:26:03 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:18.799 18:26:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:18.799 18:26:03 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:18.799 18:26:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:18.799 18:26:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.799 18:26:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.799 ************************************ 00:11:18.799 START TEST nvmf_nvme_cli 00:11:18.799 ************************************ 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:18.799 * Looking for test storage... 00:11:18.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.799 18:26:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.799 18:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.799 18:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.799 18:26:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.799 18:26:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.071 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.072 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.072 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.072 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.331 18:26:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:24.331 00:11:24.331 --- 10.0.0.2 ping statistics --- 00:11:24.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.331 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:11:24.331 00:11:24.331 --- 10.0.0.1 ping statistics --- 00:11:24.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.331 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3836330 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3836330 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3836330 ']' 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.331 18:26:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:24.331 [2024-07-15 18:26:09.831669] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
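[Note] For orientation, the fabric these phy tests run over is a two-port e810 loopback: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and the ping pair above verifies both directions. Condensed from the nvmf_tcp_init trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                                 # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                       # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0         # target side
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT              # let the NVMe/TCP port through
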
00:11:24.331 [2024-07-15 18:26:09.831715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.331 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.590 [2024-07-15 18:26:09.902012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.590 [2024-07-15 18:26:09.976565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.590 [2024-07-15 18:26:09.976605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.590 [2024-07-15 18:26:09.976612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.590 [2024-07-15 18:26:09.976618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.590 [2024-07-15 18:26:09.976622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.590 [2024-07-15 18:26:09.976757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.590 [2024-07-15 18:26:09.976866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.590 [2024-07-15 18:26:09.976972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.590 [2024-07-15 18:26:09.976971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.156 [2024-07-15 18:26:10.673085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.156 Malloc0 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.156 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 Malloc1 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 [2024-07-15 18:26:10.754456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:25.415 00:11:25.415 Discovery Log Number of Records 2, Generation counter 2 00:11:25.415 =====Discovery Log Entry 0====== 00:11:25.415 trtype: tcp 00:11:25.415 adrfam: ipv4 00:11:25.415 subtype: current discovery subsystem 00:11:25.415 treq: not required 00:11:25.415 portid: 0 00:11:25.415 trsvcid: 4420 00:11:25.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:25.415 traddr: 10.0.0.2 00:11:25.415 eflags: explicit discovery connections, duplicate discovery information 00:11:25.415 sectype: none 00:11:25.415 =====Discovery Log Entry 1====== 00:11:25.415 trtype: tcp 00:11:25.415 adrfam: ipv4 00:11:25.415 subtype: nvme subsystem 00:11:25.415 treq: not required 00:11:25.415 portid: 0 00:11:25.415 trsvcid: 4420 00:11:25.415 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:25.415 traddr: 10.0.0.2 00:11:25.415 eflags: none 00:11:25.415 sectype: none 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:25.415 18:26:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:26.786 18:26:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.684 18:26:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:28.684 18:26:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:28.684 /dev/nvme0n1 ]] 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.684 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:28.942 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.200 rmmod nvme_tcp 00:11:29.200 rmmod nvme_fabrics 00:11:29.200 rmmod nvme_keyring 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3836330 ']' 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3836330 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3836330 ']' 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3836330 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3836330 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3836330' 00:11:29.200 killing process with pid 3836330 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3836330 00:11:29.200 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3836330 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.458 18:26:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.433 18:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.692 00:11:31.692 real 0m13.115s 00:11:31.692 user 0m21.366s 00:11:31.692 sys 0m4.913s 00:11:31.692 18:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.692 18:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 ************************************ 00:11:31.692 END TEST nvmf_nvme_cli 00:11:31.692 ************************************ 00:11:31.692 18:26:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:31.692 18:26:17 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:31.692 18:26:17 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:31.692 18:26:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:31.692 18:26:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.692 18:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 ************************************ 00:11:31.692 START TEST nvmf_vfio_user 00:11:31.692 ************************************ 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:31.692 * Looking for test storage... 00:11:31.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:31.692 
18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3837626 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3837626' 00:11:31.692 Process pid: 3837626 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3837626 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3837626 ']' 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.692 18:26:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 [2024-07-15 18:26:17.237319] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:31.692 [2024-07-15 18:26:17.237380] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.950 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.950 [2024-07-15 18:26:17.304509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.950 [2024-07-15 18:26:17.383657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.950 [2024-07-15 18:26:17.383695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.950 [2024-07-15 18:26:17.383701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.950 [2024-07-15 18:26:17.383707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.950 [2024-07-15 18:26:17.383711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
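Once this target is up, setup_nvmf_vfio_user (traced below) provisions two vfio-user controllers. For each device it creates a socket directory under /var/run/vfio-user, a 64 MB malloc bdev, a subsystem, a namespace backed by the bdev, and a VFIOUSER listener whose traddr is the directory rather than an IP address. A condensed sketch of the same sequence, using the rpc.py path and the exact RPC names from the trace:

    #!/usr/bin/env bash
    # Condensed sketch of the provisioning loop traced below; run against the
    # already-started nvmf_tgt. Names (Malloc$i, SPDK$i, cnode$i) are the ones
    # the test uses.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc bdev_malloc_create 64 512 -b Malloc$i         # 64 MB, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        # For vfio-user the listener traddr is the socket directory, not an IP
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done

Each directory ends up holding the cntrl socket that a vfio-user client (here spdk_nvme_identify) opens in place of a PCI device.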
00:11:31.950 [2024-07-15 18:26:17.383764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.950 [2024-07-15 18:26:17.383873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.950 [2024-07-15 18:26:17.383977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.950 [2024-07-15 18:26:17.383979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.514 18:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.514 18:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:32.514 18:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:33.888 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:33.888 Malloc1 00:11:34.145 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:34.146 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:34.403 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:34.662 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:34.662 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:34.662 18:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:34.662 Malloc2 00:11:34.662 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:34.919 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:35.177 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:35.436 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:35.436 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:35.436 18:26:20 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.436 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:35.436 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:35.436 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:35.436 [2024-07-15 18:26:20.767700] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:11:35.436 [2024-07-15 18:26:20.767732] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838331 ] 00:11:35.436 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.436 [2024-07-15 18:26:20.796629] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:35.436 [2024-07-15 18:26:20.804672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.436 [2024-07-15 18:26:20.804690] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f40287d9000 00:11:35.436 [2024-07-15 18:26:20.805674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.806670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.807671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.808682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.809679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.810692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.811698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.812702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.436 [2024-07-15 18:26:20.813713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.436 [2024-07-15 18:26:20.813722] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f40287ce000 00:11:35.436 [2024-07-15 18:26:20.814636] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.436 [2024-07-15 18:26:20.824078] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:35.436 [2024-07-15 18:26:20.824098] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:35.436 [2024-07-15 18:26:20.828800] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:35.436 [2024-07-15 18:26:20.828834] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:35.436 [2024-07-15 18:26:20.828898] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:35.436 [2024-07-15 18:26:20.828914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:35.436 [2024-07-15 18:26:20.828919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:35.436 [2024-07-15 18:26:20.829800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:35.436 [2024-07-15 18:26:20.829808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:35.436 [2024-07-15 18:26:20.829814] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:35.436 [2024-07-15 18:26:20.830805] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:35.436 [2024-07-15 18:26:20.830813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:35.436 [2024-07-15 18:26:20.830819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.831810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:35.436 [2024-07-15 18:26:20.831817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.832820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:35.436 [2024-07-15 18:26:20.832830] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:35.436 [2024-07-15 18:26:20.832834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.832839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.832944] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:35.436 [2024-07-15 18:26:20.832948] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.832953] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:35.436 [2024-07-15 18:26:20.833828] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:35.436 [2024-07-15 18:26:20.834831] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:35.436 [2024-07-15 18:26:20.835838] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:35.436 [2024-07-15 18:26:20.836834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:35.436 [2024-07-15 18:26:20.836898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:35.436 [2024-07-15 18:26:20.837846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:35.436 [2024-07-15 18:26:20.837853] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:35.436 [2024-07-15 18:26:20.837857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.837873] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:35.436 [2024-07-15 18:26:20.837883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.837897] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.436 [2024-07-15 18:26:20.837901] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.436 [2024-07-15 18:26:20.837913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.837956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.837965] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:35.436 [2024-07-15 18:26:20.837971] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:35.436 [2024-07-15 18:26:20.837975] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:35.436 [2024-07-15 18:26:20.837979] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:35.436 [2024-07-15 18:26:20.837983] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:35.436 [2024-07-15 18:26:20.837990] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:35.436 [2024-07-15 18:26:20.837994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.838022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.838034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.436 [2024-07-15 18:26:20.838042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.436 [2024-07-15 18:26:20.838049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.436 [2024-07-15 18:26:20.838056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.436 [2024-07-15 18:26:20.838060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.838086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.838092] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:35.436 [2024-07-15 18:26:20.838096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.838122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.838168] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838181] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:35.436 [2024-07-15 18:26:20.838184] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:35.436 [2024-07-15 18:26:20.838190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.838199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.838209] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:35.436 [2024-07-15 18:26:20.838216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.436 [2024-07-15 18:26:20.838231] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.436 [2024-07-15 18:26:20.838237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.436 [2024-07-15 18:26:20.838259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:35.436 [2024-07-15 18:26:20.838270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:35.436 [2024-07-15 18:26:20.838283] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.436 [2024-07-15 18:26:20.838286] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.436 [2024-07-15 18:26:20.838292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
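The debug run above and below is the initiator-side controller state machine completing over vfio-user: after mapping the BARs the driver reads CAP and VS (register offsets 0x0 and 0x8), toggles CC.EN and polls CSTS.RDY through plain register accesses at offsets 0x14 and 0x1c, then walks the admin identify sequence (IDENTIFY controller, active namespace list, per-namespace IDENTIFY, feature and log-page queries) whose decoded results are printed as the controller report further down. The same attach can be repeated by hand against the second controller provisioned earlier, and the target-side view dumped for comparison; nvmf_get_subsystems is the stock SPDK listing RPC, shown only as an assumed cross-check since the test itself does not call it:

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Attach to cnode2 with the same debug flags used in the trace above
    $spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -g -L nvme -L nvme_vfio -L vfio_pci
    # Target-side view of both subsystems, their listeners and namespaces
    $spdk/scripts/rpc.py nvmf_get_subsystems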
00:11:35.437 [2024-07-15 18:26:20.838321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838345] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:35.437 [2024-07-15 18:26:20.838349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:35.437 [2024-07-15 18:26:20.838353] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:35.437 [2024-07-15 18:26:20.838368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838450] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:35.437 [2024-07-15 18:26:20.838454] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:35.437 [2024-07-15 18:26:20.838457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:35.437 [2024-07-15 18:26:20.838460] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:35.437 [2024-07-15 18:26:20.838465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:35.437 [2024-07-15 18:26:20.838472] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:35.437 
[2024-07-15 18:26:20.838475] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:35.437 [2024-07-15 18:26:20.838480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838487] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:35.437 [2024-07-15 18:26:20.838490] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.437 [2024-07-15 18:26:20.838495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838501] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:35.437 [2024-07-15 18:26:20.838505] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:35.437 [2024-07-15 18:26:20.838510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:35.437 [2024-07-15 18:26:20.838516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:35.437 [2024-07-15 18:26:20.838542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:35.437 ===================================================== 00:11:35.437 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:35.437 ===================================================== 00:11:35.437 Controller Capabilities/Features 00:11:35.437 ================================ 00:11:35.437 Vendor ID: 4e58 00:11:35.437 Subsystem Vendor ID: 4e58 00:11:35.437 Serial Number: SPDK1 00:11:35.437 Model Number: SPDK bdev Controller 00:11:35.437 Firmware Version: 24.09 00:11:35.437 Recommended Arb Burst: 6 00:11:35.437 IEEE OUI Identifier: 8d 6b 50 00:11:35.437 Multi-path I/O 00:11:35.437 May have multiple subsystem ports: Yes 00:11:35.437 May have multiple controllers: Yes 00:11:35.437 Associated with SR-IOV VF: No 00:11:35.437 Max Data Transfer Size: 131072 00:11:35.437 Max Number of Namespaces: 32 00:11:35.437 Max Number of I/O Queues: 127 00:11:35.437 NVMe Specification Version (VS): 1.3 00:11:35.437 NVMe Specification Version (Identify): 1.3 00:11:35.437 Maximum Queue Entries: 256 00:11:35.437 Contiguous Queues Required: Yes 00:11:35.437 Arbitration Mechanisms Supported 00:11:35.437 Weighted Round Robin: Not Supported 00:11:35.437 Vendor Specific: Not Supported 00:11:35.437 Reset Timeout: 15000 ms 00:11:35.437 Doorbell Stride: 4 bytes 00:11:35.437 NVM Subsystem Reset: Not Supported 00:11:35.437 Command Sets Supported 00:11:35.437 NVM Command Set: Supported 00:11:35.437 Boot Partition: Not Supported 00:11:35.437 Memory Page Size Minimum: 4096 bytes 00:11:35.437 Memory Page Size Maximum: 4096 bytes 00:11:35.437 Persistent Memory Region: Not Supported 
00:11:35.437 Optional Asynchronous Events Supported 00:11:35.437 Namespace Attribute Notices: Supported 00:11:35.437 Firmware Activation Notices: Not Supported 00:11:35.437 ANA Change Notices: Not Supported 00:11:35.437 PLE Aggregate Log Change Notices: Not Supported 00:11:35.437 LBA Status Info Alert Notices: Not Supported 00:11:35.437 EGE Aggregate Log Change Notices: Not Supported 00:11:35.437 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.437 Zone Descriptor Change Notices: Not Supported 00:11:35.437 Discovery Log Change Notices: Not Supported 00:11:35.437 Controller Attributes 00:11:35.437 128-bit Host Identifier: Supported 00:11:35.437 Non-Operational Permissive Mode: Not Supported 00:11:35.437 NVM Sets: Not Supported 00:11:35.437 Read Recovery Levels: Not Supported 00:11:35.437 Endurance Groups: Not Supported 00:11:35.437 Predictable Latency Mode: Not Supported 00:11:35.437 Traffic Based Keep ALive: Not Supported 00:11:35.437 Namespace Granularity: Not Supported 00:11:35.437 SQ Associations: Not Supported 00:11:35.437 UUID List: Not Supported 00:11:35.437 Multi-Domain Subsystem: Not Supported 00:11:35.437 Fixed Capacity Management: Not Supported 00:11:35.437 Variable Capacity Management: Not Supported 00:11:35.437 Delete Endurance Group: Not Supported 00:11:35.437 Delete NVM Set: Not Supported 00:11:35.437 Extended LBA Formats Supported: Not Supported 00:11:35.437 Flexible Data Placement Supported: Not Supported 00:11:35.437 00:11:35.437 Controller Memory Buffer Support 00:11:35.437 ================================ 00:11:35.437 Supported: No 00:11:35.437 00:11:35.437 Persistent Memory Region Support 00:11:35.437 ================================ 00:11:35.437 Supported: No 00:11:35.437 00:11:35.437 Admin Command Set Attributes 00:11:35.437 ============================ 00:11:35.437 Security Send/Receive: Not Supported 00:11:35.437 Format NVM: Not Supported 00:11:35.437 Firmware Activate/Download: Not Supported 00:11:35.437 Namespace Management: Not Supported 00:11:35.437 Device Self-Test: Not Supported 00:11:35.437 Directives: Not Supported 00:11:35.437 NVMe-MI: Not Supported 00:11:35.437 Virtualization Management: Not Supported 00:11:35.437 Doorbell Buffer Config: Not Supported 00:11:35.437 Get LBA Status Capability: Not Supported 00:11:35.437 Command & Feature Lockdown Capability: Not Supported 00:11:35.437 Abort Command Limit: 4 00:11:35.437 Async Event Request Limit: 4 00:11:35.437 Number of Firmware Slots: N/A 00:11:35.437 Firmware Slot 1 Read-Only: N/A 00:11:35.437 Firmware Activation Without Reset: N/A 00:11:35.437 Multiple Update Detection Support: N/A 00:11:35.437 Firmware Update Granularity: No Information Provided 00:11:35.437 Per-Namespace SMART Log: No 00:11:35.437 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.437 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:35.437 Command Effects Log Page: Supported 00:11:35.437 Get Log Page Extended Data: Supported 00:11:35.437 Telemetry Log Pages: Not Supported 00:11:35.437 Persistent Event Log Pages: Not Supported 00:11:35.437 Supported Log Pages Log Page: May Support 00:11:35.437 Commands Supported & Effects Log Page: Not Supported 00:11:35.437 Feature Identifiers & Effects Log Page:May Support 00:11:35.437 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.437 Data Area 4 for Telemetry Log: Not Supported 00:11:35.437 Error Log Page Entries Supported: 128 00:11:35.437 Keep Alive: Supported 00:11:35.437 Keep Alive Granularity: 10000 ms 00:11:35.437 00:11:35.437 NVM Command Set Attributes 
00:11:35.437 ========================== 00:11:35.437 Submission Queue Entry Size 00:11:35.437 Max: 64 00:11:35.437 Min: 64 00:11:35.437 Completion Queue Entry Size 00:11:35.437 Max: 16 00:11:35.437 Min: 16 00:11:35.437 Number of Namespaces: 32 00:11:35.437 Compare Command: Supported 00:11:35.438 Write Uncorrectable Command: Not Supported 00:11:35.438 Dataset Management Command: Supported 00:11:35.438 Write Zeroes Command: Supported 00:11:35.438 Set Features Save Field: Not Supported 00:11:35.438 Reservations: Not Supported 00:11:35.438 Timestamp: Not Supported 00:11:35.438 Copy: Supported 00:11:35.438 Volatile Write Cache: Present 00:11:35.438 Atomic Write Unit (Normal): 1 00:11:35.438 Atomic Write Unit (PFail): 1 00:11:35.438 Atomic Compare & Write Unit: 1 00:11:35.438 Fused Compare & Write: Supported 00:11:35.438 Scatter-Gather List 00:11:35.438 SGL Command Set: Supported (Dword aligned) 00:11:35.438 SGL Keyed: Not Supported 00:11:35.438 SGL Bit Bucket Descriptor: Not Supported 00:11:35.438 SGL Metadata Pointer: Not Supported 00:11:35.438 Oversized SGL: Not Supported 00:11:35.438 SGL Metadata Address: Not Supported 00:11:35.438 SGL Offset: Not Supported 00:11:35.438 Transport SGL Data Block: Not Supported 00:11:35.438 Replay Protected Memory Block: Not Supported 00:11:35.438 00:11:35.438 Firmware Slot Information 00:11:35.438 ========================= 00:11:35.438 Active slot: 1 00:11:35.438 Slot 1 Firmware Revision: 24.09 00:11:35.438 00:11:35.438 00:11:35.438 Commands Supported and Effects 00:11:35.438 ============================== 00:11:35.438 Admin Commands 00:11:35.438 -------------- 00:11:35.438 Get Log Page (02h): Supported 00:11:35.438 Identify (06h): Supported 00:11:35.438 Abort (08h): Supported 00:11:35.438 Set Features (09h): Supported 00:11:35.438 Get Features (0Ah): Supported 00:11:35.438 Asynchronous Event Request (0Ch): Supported 00:11:35.438 Keep Alive (18h): Supported 00:11:35.438 I/O Commands 00:11:35.438 ------------ 00:11:35.438 Flush (00h): Supported LBA-Change 00:11:35.438 Write (01h): Supported LBA-Change 00:11:35.438 Read (02h): Supported 00:11:35.438 Compare (05h): Supported 00:11:35.438 Write Zeroes (08h): Supported LBA-Change 00:11:35.438 Dataset Management (09h): Supported LBA-Change 00:11:35.438 Copy (19h): Supported LBA-Change 00:11:35.438 00:11:35.438 Error Log 00:11:35.438 ========= 00:11:35.438 00:11:35.438 Arbitration 00:11:35.438 =========== 00:11:35.438 Arbitration Burst: 1 00:11:35.438 00:11:35.438 Power Management 00:11:35.438 ================ 00:11:35.438 Number of Power States: 1 00:11:35.438 Current Power State: Power State #0 00:11:35.438 Power State #0: 00:11:35.438 Max Power: 0.00 W 00:11:35.438 Non-Operational State: Operational 00:11:35.438 Entry Latency: Not Reported 00:11:35.438 Exit Latency: Not Reported 00:11:35.438 Relative Read Throughput: 0 00:11:35.438 Relative Read Latency: 0 00:11:35.438 Relative Write Throughput: 0 00:11:35.438 Relative Write Latency: 0 00:11:35.438 Idle Power: Not Reported 00:11:35.438 Active Power: Not Reported 00:11:35.438 Non-Operational Permissive Mode: Not Supported 00:11:35.438 00:11:35.438 Health Information 00:11:35.438 ================== 00:11:35.438 Critical Warnings: 00:11:35.438 Available Spare Space: OK 00:11:35.438 Temperature: OK 00:11:35.438 Device Reliability: OK 00:11:35.438 Read Only: No 00:11:35.438 Volatile Memory Backup: OK 00:11:35.438 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:35.438 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:35.438 Available Spare: 0% 00:11:35.438 
[2024-07-15 18:26:20.838631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:35.438 [2024-07-15 18:26:20.838640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:35.438 [2024-07-15 18:26:20.838664] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:35.438 [2024-07-15 18:26:20.838672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.438 [2024-07-15 18:26:20.838678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.438 [2024-07-15 18:26:20.838684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.438 [2024-07-15 18:26:20.838689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.438 [2024-07-15 18:26:20.838849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:35.438 [2024-07-15 18:26:20.838858] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:35.438 [2024-07-15 18:26:20.839849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:35.438 [2024-07-15 18:26:20.839902] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:35.438 [2024-07-15 18:26:20.839908] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:35.438 [2024-07-15 18:26:20.840857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:35.438 [2024-07-15 18:26:20.840867] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:35.438 [2024-07-15 18:26:20.840913] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:35.438 [2024-07-15 18:26:20.843344] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.438 Available Spare Threshold: 0% 00:11:35.438 Life Percentage Used: 0% 00:11:35.438 Data Units Read: 0 00:11:35.438 Data Units Written: 0 00:11:35.438 Host Read Commands: 0 00:11:35.438 Host Write Commands: 0 00:11:35.438 Controller Busy Time: 0 minutes 00:11:35.438 Power Cycles: 0 00:11:35.438 Power On Hours: 0 hours 00:11:35.438 Unsafe Shutdowns: 0 00:11:35.438 Unrecoverable Media Errors: 0 00:11:35.438 Lifetime Error Log Entries: 0 00:11:35.438 Warning Temperature Time: 0 minutes 00:11:35.438 Critical Temperature Time: 0 minutes 00:11:35.438 00:11:35.438 Number of Queues 00:11:35.438 ================ 00:11:35.438 Number of I/O Submission Queues: 127 00:11:35.438 Number of I/O Completion Queues: 127 00:11:35.438 00:11:35.438 Active Namespaces 00:11:35.438 ================= 00:11:35.438 Namespace ID:1 00:11:35.438 Error Recovery Timeout: Unlimited 00:11:35.438 Command
Set Identifier: NVM (00h) 00:11:35.438 Deallocate: Supported 00:11:35.438 Deallocated/Unwritten Error: Not Supported 00:11:35.438 Deallocated Read Value: Unknown 00:11:35.438 Deallocate in Write Zeroes: Not Supported 00:11:35.438 Deallocated Guard Field: 0xFFFF 00:11:35.438 Flush: Supported 00:11:35.438 Reservation: Supported 00:11:35.438 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.438 Size (in LBAs): 131072 (0GiB) 00:11:35.438 Capacity (in LBAs): 131072 (0GiB) 00:11:35.438 Utilization (in LBAs): 131072 (0GiB) 00:11:35.438 NGUID: E2DD58CEBA894C649F65B3EDAEFBCEC6 00:11:35.438 UUID: e2dd58ce-ba89-4c64-9f65-b3edaefbcec6 00:11:35.438 Thin Provisioning: Not Supported 00:11:35.438 Per-NS Atomic Units: Yes 00:11:35.438 Atomic Boundary Size (Normal): 0 00:11:35.438 Atomic Boundary Size (PFail): 0 00:11:35.438 Atomic Boundary Offset: 0 00:11:35.438 Maximum Single Source Range Length: 65535 00:11:35.438 Maximum Copy Length: 65535 00:11:35.438 Maximum Source Range Count: 1 00:11:35.438 NGUID/EUI64 Never Reused: No 00:11:35.438 Namespace Write Protected: No 00:11:35.438 Number of LBA Formats: 1 00:11:35.438 Current LBA Format: LBA Format #00 00:11:35.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.438 00:11:35.438 18:26:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:35.438 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.696 [2024-07-15 18:26:21.053857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.962 Initializing NVMe Controllers 00:11:40.962 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:40.962 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:40.962 Initialization complete. Launching workers. 00:11:40.962 ======================================================== 00:11:40.962 Latency(us) 00:11:40.962 Device Information : IOPS MiB/s Average min max 00:11:40.962 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39935.36 156.00 3205.00 926.00 6694.90 00:11:40.962 ======================================================== 00:11:40.962 Total : 39935.36 156.00 3205.00 926.00 6694.90 00:11:40.962 00:11:40.962 [2024-07-15 18:26:26.073826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.962 18:26:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:40.962 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.962 [2024-07-15 18:26:26.290858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:46.226 Initializing NVMe Controllers 00:11:46.226 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:46.226 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:46.226 Initialization complete. Launching workers. 
00:11:46.226 ======================================================== 00:11:46.226 Latency(us) 00:11:46.226 Device Information : IOPS MiB/s Average min max 00:11:46.226 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.20 62.70 7980.34 5991.76 15425.23 00:11:46.226 ======================================================== 00:11:46.226 Total : 16050.20 62.70 7980.34 5991.76 15425.23 00:11:46.226 00:11:46.226 [2024-07-15 18:26:31.330773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:46.226 18:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.226 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.226 [2024-07-15 18:26:31.523688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:51.494 [2024-07-15 18:26:36.622711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:51.494 Initializing NVMe Controllers 00:11:51.494 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:51.494 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:51.494 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:51.494 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:51.494 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:51.494 Initialization complete. Launching workers. 00:11:51.494 Starting thread on core 2 00:11:51.494 Starting thread on core 3 00:11:51.494 Starting thread on core 1 00:11:51.494 18:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:51.494 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.494 [2024-07-15 18:26:36.904751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.779 [2024-07-15 18:26:39.976659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.779 Initializing NVMe Controllers 00:11:54.779 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.779 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.779 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:54.779 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:54.779 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:54.779 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:54.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:54.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:54.779 Initialization complete. Launching workers. 
00:11:54.779 Starting thread on core 1 with urgent priority queue 00:11:54.779 Starting thread on core 2 with urgent priority queue 00:11:54.779 Starting thread on core 3 with urgent priority queue 00:11:54.779 Starting thread on core 0 with urgent priority queue 00:11:54.779 SPDK bdev Controller (SPDK1 ) core 0: 8562.00 IO/s 11.68 secs/100000 ios 00:11:54.779 SPDK bdev Controller (SPDK1 ) core 1: 8238.67 IO/s 12.14 secs/100000 ios 00:11:54.779 SPDK bdev Controller (SPDK1 ) core 2: 10567.33 IO/s 9.46 secs/100000 ios 00:11:54.779 SPDK bdev Controller (SPDK1 ) core 3: 7553.00 IO/s 13.24 secs/100000 ios 00:11:54.779 ======================================================== 00:11:54.779 00:11:54.779 18:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:54.779 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.779 [2024-07-15 18:26:40.251785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.779 Initializing NVMe Controllers 00:11:54.779 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.779 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.779 Namespace ID: 1 size: 0GB 00:11:54.779 Initialization complete. 00:11:54.779 INFO: using host memory buffer for IO 00:11:54.779 Hello world! 00:11:54.779 [2024-07-15 18:26:40.286008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.779 18:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:55.037 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.037 [2024-07-15 18:26:40.549654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.412 Initializing NVMe Controllers 00:11:56.412 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.412 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.412 Initialization complete. Launching workers. 
00:11:56.412 submit (in ns) avg, min, max = 5804.8, 3181.9, 4000884.8 00:11:56.412 complete (in ns) avg, min, max = 18502.7, 1754.3, 6010560.0 00:11:56.412 00:11:56.412 Submit histogram 00:11:56.412 ================ 00:11:56.412 Range in us Cumulative Count 00:11:56.412 3.170 - 3.185: 0.0060% ( 1) 00:11:56.412 3.185 - 3.200: 0.0299% ( 4) 00:11:56.412 3.200 - 3.215: 0.3589% ( 55) 00:11:56.412 3.215 - 3.230: 1.1665% ( 135) 00:11:56.412 3.230 - 3.246: 2.0219% ( 143) 00:11:56.412 3.246 - 3.261: 4.3010% ( 381) 00:11:56.412 3.261 - 3.276: 9.7266% ( 907) 00:11:56.412 3.276 - 3.291: 15.6487% ( 990) 00:11:56.412 3.291 - 3.307: 22.2468% ( 1103) 00:11:56.412 3.307 - 3.322: 29.2038% ( 1163) 00:11:56.412 3.322 - 3.337: 35.0601% ( 979) 00:11:56.412 3.337 - 3.352: 40.8327% ( 965) 00:11:56.412 3.352 - 3.368: 46.8685% ( 1009) 00:11:56.412 3.368 - 3.383: 52.7427% ( 982) 00:11:56.412 3.383 - 3.398: 58.0128% ( 881) 00:11:56.412 3.398 - 3.413: 64.4075% ( 1069) 00:11:56.412 3.413 - 3.429: 71.8670% ( 1247) 00:11:56.412 3.429 - 3.444: 76.7542% ( 817) 00:11:56.412 3.444 - 3.459: 80.8100% ( 678) 00:11:56.412 3.459 - 3.474: 83.7112% ( 485) 00:11:56.412 3.474 - 3.490: 85.9365% ( 372) 00:11:56.412 3.490 - 3.505: 87.1867% ( 209) 00:11:56.412 3.505 - 3.520: 87.6653% ( 80) 00:11:56.412 3.520 - 3.535: 88.0900% ( 71) 00:11:56.412 3.535 - 3.550: 88.4788% ( 65) 00:11:56.412 3.550 - 3.566: 89.1667% ( 115) 00:11:56.412 3.566 - 3.581: 89.9444% ( 130) 00:11:56.412 3.581 - 3.596: 91.0032% ( 177) 00:11:56.413 3.596 - 3.611: 91.9782% ( 163) 00:11:56.413 3.611 - 3.627: 93.0011% ( 171) 00:11:56.413 3.627 - 3.642: 93.9762% ( 163) 00:11:56.413 3.642 - 3.657: 94.8077% ( 139) 00:11:56.413 3.657 - 3.672: 95.7169% ( 152) 00:11:56.413 3.672 - 3.688: 96.6441% ( 155) 00:11:56.413 3.688 - 3.703: 97.4038% ( 127) 00:11:56.413 3.703 - 3.718: 98.0200% ( 103) 00:11:56.413 3.718 - 3.733: 98.4806% ( 77) 00:11:56.413 3.733 - 3.749: 98.8395% ( 60) 00:11:56.413 3.749 - 3.764: 99.1266% ( 48) 00:11:56.413 3.764 - 3.779: 99.3719% ( 41) 00:11:56.413 3.779 - 3.794: 99.4676% ( 16) 00:11:56.413 3.794 - 3.810: 99.5693% ( 17) 00:11:56.413 3.810 - 3.825: 99.6052% ( 6) 00:11:56.413 3.825 - 3.840: 99.6291% ( 4) 00:11:56.413 3.840 - 3.855: 99.6590% ( 5) 00:11:56.413 3.855 - 3.870: 99.6650% ( 1) 00:11:56.413 5.486 - 5.516: 99.6710% ( 1) 00:11:56.413 5.547 - 5.577: 99.6770% ( 1) 00:11:56.413 6.004 - 6.034: 99.6830% ( 1) 00:11:56.413 6.034 - 6.065: 99.6889% ( 1) 00:11:56.413 6.217 - 6.248: 99.6949% ( 1) 00:11:56.413 6.309 - 6.339: 99.7009% ( 1) 00:11:56.413 6.430 - 6.461: 99.7069% ( 1) 00:11:56.413 6.552 - 6.583: 99.7129% ( 1) 00:11:56.413 6.583 - 6.613: 99.7188% ( 1) 00:11:56.413 6.735 - 6.766: 99.7248% ( 1) 00:11:56.413 6.796 - 6.827: 99.7308% ( 1) 00:11:56.413 6.888 - 6.918: 99.7368% ( 1) 00:11:56.413 6.918 - 6.949: 99.7428% ( 1) 00:11:56.413 7.040 - 7.070: 99.7547% ( 2) 00:11:56.413 7.070 - 7.101: 99.7667% ( 2) 00:11:56.413 7.131 - 7.162: 99.7727% ( 1) 00:11:56.413 7.162 - 7.192: 99.7787% ( 1) 00:11:56.413 7.253 - 7.284: 99.7847% ( 1) 00:11:56.413 7.314 - 7.345: 99.7906% ( 1) 00:11:56.413 7.345 - 7.375: 99.7966% ( 1) 00:11:56.413 7.375 - 7.406: 99.8086% ( 2) 00:11:56.413 7.406 - 7.436: 99.8146% ( 1) 00:11:56.413 7.467 - 7.497: 99.8325% ( 3) 00:11:56.413 7.680 - 7.710: 99.8385% ( 1) 00:11:56.413 7.863 - 7.924: 99.8505% ( 2) 00:11:56.413 7.924 - 7.985: 99.8624% ( 2) 00:11:56.413 8.046 - 8.107: 99.8744% ( 2) 00:11:56.413 8.107 - 8.168: 99.8804% ( 1) 00:11:56.413 8.168 - 8.229: 99.8983% ( 3) 00:11:56.413 8.533 - 8.594: 99.9043% ( 1) 00:11:56.413 8.594 - 8.655: 99.9103% 
( 1) 00:11:56.413 8.716 - 8.777: 99.9163% ( 1) 00:11:56.413 9.143 - 9.204: 99.9222% ( 1) 00:11:56.413 [2024-07-15 18:26:41.571557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.413 9.448 - 9.509: 99.9282% ( 1) 00:11:56.413 10.545 - 10.606: 99.9342% ( 1) 00:11:56.413 155.063 - 156.038: 99.9402% ( 1) 00:11:56.413 3994.575 - 4025.783: 100.0000% ( 10) 00:11:56.413 00:11:56.413 Complete histogram 00:11:56.413 ================== 00:11:56.413 Range in us Cumulative Count 00:11:56.413 1.752 - 1.760: 0.1316% ( 22) 00:11:56.413 1.760 - 1.768: 1.1605% ( 172) 00:11:56.413 1.768 - 1.775: 3.8464% ( 449) 00:11:56.413 1.775 - 1.783: 6.5083% ( 445) 00:11:56.413 1.783 - 1.790: 7.8543% ( 225) 00:11:56.413 1.790 - 1.798: 8.7456% ( 149) 00:11:56.413 1.798 - 1.806: 10.0138% ( 212) 00:11:56.413 1.806 - 1.813: 18.0834% ( 1349) 00:11:56.413 1.813 - 1.821: 42.6512% ( 4107) 00:11:56.413 1.821 - 1.829: 70.3535% ( 4631) 00:11:56.413 1.829 - 1.836: 86.4808% ( 2696) 00:11:56.413 1.836 - 1.844: 91.8706% ( 901) 00:11:56.413 1.844 - 1.851: 94.2334% ( 395) 00:11:56.413 1.851 - 1.859: 95.4896% ( 210) 00:11:56.413 1.859 - 1.867: 96.0280% ( 90) 00:11:56.413 1.867 - 1.874: 96.3211% ( 49) 00:11:56.413 1.874 - 1.882: 96.6082% ( 48) 00:11:56.413 1.882 - 1.890: 97.1705% ( 94) 00:11:56.413 1.890 - 1.897: 97.9362% ( 128) 00:11:56.413 1.897 - 1.905: 98.5105% ( 96) 00:11:56.413 1.905 - 1.912: 98.8933% ( 64) 00:11:56.413 1.912 - 1.920: 99.0489% ( 26) 00:11:56.413 1.920 - 1.928: 99.1266% ( 13) 00:11:56.413 1.928 - 1.935: 99.1745% ( 8) 00:11:56.413 1.935 - 1.943: 99.2044% ( 5) 00:11:56.413 1.943 - 1.950: 99.2283% ( 4) 00:11:56.413 1.950 - 1.966: 99.2822% ( 9) 00:11:56.413 1.966 - 1.981: 99.2941% ( 2) 00:11:56.413 1.981 - 1.996: 99.3121% ( 3) 00:11:56.413 1.996 - 2.011: 99.3181% ( 1) 00:11:56.413 2.011 - 2.027: 99.3300% ( 2) 00:11:56.413 2.027 - 2.042: 99.3420% ( 2) 00:11:56.413 2.042 - 2.057: 99.3480% ( 1) 00:11:56.413 2.210 - 2.225: 99.3540% ( 1) 00:11:56.413 2.270 - 2.286: 99.3599% ( 1) 00:11:56.413 2.347 - 2.362: 99.3659% ( 1) 00:11:56.413 3.962 - 3.992: 99.3719% ( 1) 00:11:56.413 4.236 - 4.267: 99.3779% ( 1) 00:11:56.413 4.510 - 4.541: 99.3839% ( 1) 00:11:56.413 4.602 - 4.632: 99.3958% ( 2) 00:11:56.413 4.693 - 4.724: 99.4018% ( 1) 00:11:56.413 4.785 - 4.815: 99.4078% ( 1) 00:11:56.413 4.815 - 4.846: 99.4138% ( 1) 00:11:56.413 4.937 - 4.968: 99.4198% ( 1) 00:11:56.413 5.029 - 5.059: 99.4257% ( 1) 00:11:56.413 5.059 - 5.090: 99.4317% ( 1) 00:11:56.413 5.090 - 5.120: 99.4377% ( 1) 00:11:56.413 5.120 - 5.150: 99.4437% ( 1) 00:11:56.413 5.425 - 5.455: 99.4497% ( 1) 00:11:56.413 5.486 - 5.516: 99.4556% ( 1) 00:11:56.413 5.669 - 5.699: 99.4616% ( 1) 00:11:56.413 5.699 - 5.730: 99.4676% ( 1) 00:11:56.413 5.730 - 5.760: 99.4796% ( 2) 00:11:56.413 5.760 - 5.790: 99.4856% ( 1) 00:11:56.413 5.943 - 5.973: 99.4975% ( 2) 00:11:56.413 5.973 - 6.004: 99.5035% ( 1) 00:11:56.413 6.004 - 6.034: 99.5095% ( 1) 00:11:56.413 6.065 - 6.095: 99.5155% ( 1) 00:11:56.413 6.156 - 6.187: 99.5274% ( 2) 00:11:56.413 6.278 - 6.309: 99.5334% ( 1) 00:11:56.413 6.674 - 6.705: 99.5394% ( 1) 00:11:56.413 6.735 - 6.766: 99.5454% ( 1) 00:11:56.413 6.918 - 6.949: 99.5514% ( 1) 00:11:56.413 7.010 - 7.040: 99.5573% ( 1) 00:11:56.413 7.375 - 7.406: 99.5633% ( 1) 00:11:56.413 7.802 - 7.863: 99.5693% ( 1) 00:11:56.413 7.985 - 8.046: 99.5753% ( 1) 00:11:56.413 8.107 - 8.168: 99.5813% ( 1) 00:11:56.413 38.278 - 38.522: 99.5872% ( 1) 00:11:56.413 3994.575 - 4025.783: 99.9880% ( 67) 00:11:56.413 4993.219 - 5024.427: 
99.9940% ( 1) 00:11:56.413 5991.863 - 6023.070: 100.0000% ( 1) 00:11:56.413 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:56.413 [ 00:11:56.413 { 00:11:56.413 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:56.413 "subtype": "Discovery", 00:11:56.413 "listen_addresses": [], 00:11:56.413 "allow_any_host": true, 00:11:56.413 "hosts": [] 00:11:56.413 }, 00:11:56.413 { 00:11:56.413 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:56.413 "subtype": "NVMe", 00:11:56.413 "listen_addresses": [ 00:11:56.413 { 00:11:56.413 "trtype": "VFIOUSER", 00:11:56.413 "adrfam": "IPv4", 00:11:56.413 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:56.413 "trsvcid": "0" 00:11:56.413 } 00:11:56.413 ], 00:11:56.413 "allow_any_host": true, 00:11:56.413 "hosts": [], 00:11:56.413 "serial_number": "SPDK1", 00:11:56.413 "model_number": "SPDK bdev Controller", 00:11:56.413 "max_namespaces": 32, 00:11:56.413 "min_cntlid": 1, 00:11:56.413 "max_cntlid": 65519, 00:11:56.413 "namespaces": [ 00:11:56.413 { 00:11:56.413 "nsid": 1, 00:11:56.413 "bdev_name": "Malloc1", 00:11:56.413 "name": "Malloc1", 00:11:56.413 "nguid": "E2DD58CEBA894C649F65B3EDAEFBCEC6", 00:11:56.413 "uuid": "e2dd58ce-ba89-4c64-9f65-b3edaefbcec6" 00:11:56.413 } 00:11:56.413 ] 00:11:56.413 }, 00:11:56.413 { 00:11:56.413 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:56.413 "subtype": "NVMe", 00:11:56.413 "listen_addresses": [ 00:11:56.413 { 00:11:56.413 "trtype": "VFIOUSER", 00:11:56.413 "adrfam": "IPv4", 00:11:56.413 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:56.413 "trsvcid": "0" 00:11:56.413 } 00:11:56.413 ], 00:11:56.413 "allow_any_host": true, 00:11:56.413 "hosts": [], 00:11:56.413 "serial_number": "SPDK2", 00:11:56.413 "model_number": "SPDK bdev Controller", 00:11:56.413 "max_namespaces": 32, 00:11:56.413 "min_cntlid": 1, 00:11:56.413 "max_cntlid": 65519, 00:11:56.413 "namespaces": [ 00:11:56.413 { 00:11:56.413 "nsid": 1, 00:11:56.413 "bdev_name": "Malloc2", 00:11:56.413 "name": "Malloc2", 00:11:56.413 "nguid": "D9494AC534A242BAAAEF97D7AB4F459A", 00:11:56.413 "uuid": "d9494ac5-34a2-42ba-aaef-97d7ab4f459a" 00:11:56.413 } 00:11:56.413 ] 00:11:56.413 } 00:11:56.413 ] 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:56.413 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3841782 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:56.414 18:26:41 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:56.414 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:56.414 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.414 [2024-07-15 18:26:41.929629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.671 Malloc3 00:11:56.672 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:56.672 [2024-07-15 18:26:42.140165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.672 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:56.672 Asynchronous Event Request test 00:11:56.672 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.672 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.672 Registering asynchronous event callbacks... 00:11:56.672 Starting namespace attribute notice tests for all controllers... 00:11:56.672 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:56.672 aer_cb - Changed Namespace 00:11:56.672 Cleaning up... 
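For reference, the namespace hot-add that this AER test exercises can be reproduced by hand with the same three RPCs that appear in the trace above; the nvmf_get_subsystems dump that follows shows the result, with Malloc3 attached to cnode1 as nsid 2. A minimal sketch, assuming the nvmf target from this job is still running and using the default RPC socket (the RPC and NQN variable names are illustrative only; the commands and arguments are taken verbatim from the trace):

# Minimal sketch of the namespace hot-add flow exercised by this test.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2019-07.io.spdk:cnode1

$RPC bdev_malloc_create 64 512 --name Malloc3    # create a 64 MiB malloc bdev with 512-byte blocks
$RPC nvmf_subsystem_add_ns "$NQN" Malloc3 -n 2   # hot-add it as nsid 2, which fires the namespace-attribute AER
$RPC nvmf_get_subsystems                         # the JSON dump below reflects the resulting state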
00:11:56.931 [ 00:11:56.931 { 00:11:56.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:56.931 "subtype": "Discovery", 00:11:56.931 "listen_addresses": [], 00:11:56.931 "allow_any_host": true, 00:11:56.931 "hosts": [] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:56.931 "subtype": "NVMe", 00:11:56.931 "listen_addresses": [ 00:11:56.931 { 00:11:56.931 "trtype": "VFIOUSER", 00:11:56.931 "adrfam": "IPv4", 00:11:56.931 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:56.931 "trsvcid": "0" 00:11:56.931 } 00:11:56.931 ], 00:11:56.931 "allow_any_host": true, 00:11:56.931 "hosts": [], 00:11:56.931 "serial_number": "SPDK1", 00:11:56.931 "model_number": "SPDK bdev Controller", 00:11:56.931 "max_namespaces": 32, 00:11:56.931 "min_cntlid": 1, 00:11:56.931 "max_cntlid": 65519, 00:11:56.931 "namespaces": [ 00:11:56.931 { 00:11:56.931 "nsid": 1, 00:11:56.931 "bdev_name": "Malloc1", 00:11:56.931 "name": "Malloc1", 00:11:56.931 "nguid": "E2DD58CEBA894C649F65B3EDAEFBCEC6", 00:11:56.931 "uuid": "e2dd58ce-ba89-4c64-9f65-b3edaefbcec6" 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "nsid": 2, 00:11:56.931 "bdev_name": "Malloc3", 00:11:56.931 "name": "Malloc3", 00:11:56.931 "nguid": "FF57EAE599894C09A1251B89A5F80030", 00:11:56.931 "uuid": "ff57eae5-9989-4c09-a125-1b89a5f80030" 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:56.931 "subtype": "NVMe", 00:11:56.931 "listen_addresses": [ 00:11:56.931 { 00:11:56.931 "trtype": "VFIOUSER", 00:11:56.931 "adrfam": "IPv4", 00:11:56.931 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:56.931 "trsvcid": "0" 00:11:56.931 } 00:11:56.931 ], 00:11:56.931 "allow_any_host": true, 00:11:56.931 "hosts": [], 00:11:56.931 "serial_number": "SPDK2", 00:11:56.931 "model_number": "SPDK bdev Controller", 00:11:56.931 "max_namespaces": 32, 00:11:56.931 "min_cntlid": 1, 00:11:56.931 "max_cntlid": 65519, 00:11:56.931 "namespaces": [ 00:11:56.931 { 00:11:56.931 "nsid": 1, 00:11:56.931 "bdev_name": "Malloc2", 00:11:56.931 "name": "Malloc2", 00:11:56.931 "nguid": "D9494AC534A242BAAAEF97D7AB4F459A", 00:11:56.931 "uuid": "d9494ac5-34a2-42ba-aaef-97d7ab4f459a" 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3841782 00:11:56.931 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:56.931 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:56.931 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:56.931 [2024-07-15 18:26:42.354040] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:11:56.931 [2024-07-15 18:26:42.354066] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841795 ] 00:11:56.931 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.931 [2024-07-15 18:26:42.380500] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 [2024-07-15 18:26:42.383424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:56.931 [2024-07-15 18:26:42.383445] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d7de37000 00:11:56.931 [2024-07-15 18:26:42.384427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.385439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.386438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.387444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.388454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.389459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.390467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.391469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:56.931 [2024-07-15 18:26:42.392473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:56.931 [2024-07-15 18:26:42.392483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d7de2c000 00:11:56.931 [2024-07-15 18:26:42.393399] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:56.931 [2024-07-15 18:26:42.405767] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:56.931 [2024-07-15 18:26:42.405789] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:56.931 [2024-07-15 18:26:42.407858] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:56.931 [2024-07-15 18:26:42.407894] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:56.931 [2024-07-15 18:26:42.407960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:56.931 [2024-07-15 18:26:42.407975] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:56.931 [2024-07-15 18:26:42.407979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:56.931 [2024-07-15 18:26:42.408860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:56.931 [2024-07-15 18:26:42.408870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:56.931 [2024-07-15 18:26:42.408875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:56.931 [2024-07-15 18:26:42.409864] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:56.931 [2024-07-15 18:26:42.409873] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:56.931 [2024-07-15 18:26:42.409879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:56.931 [2024-07-15 18:26:42.410873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:56.931 [2024-07-15 18:26:42.410881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:56.931 [2024-07-15 18:26:42.411882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:56.931 [2024-07-15 18:26:42.411890] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:56.931 [2024-07-15 18:26:42.411894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:56.931 [2024-07-15 18:26:42.411900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:56.931 [2024-07-15 18:26:42.412005] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:56.931 [2024-07-15 18:26:42.412009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:56.931 [2024-07-15 18:26:42.412013] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:56.931 [2024-07-15 18:26:42.412886] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:56.931 [2024-07-15 18:26:42.413890] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:56.931 [2024-07-15 18:26:42.414896] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:56.932 [2024-07-15 18:26:42.415894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.932 [2024-07-15 18:26:42.415932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:56.932 [2024-07-15 18:26:42.416911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:56.932 [2024-07-15 18:26:42.416920] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:56.932 [2024-07-15 18:26:42.416924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.416940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:56.932 [2024-07-15 18:26:42.416947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.416958] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.932 [2024-07-15 18:26:42.416962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.932 [2024-07-15 18:26:42.416973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.423346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.423357] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:56.932 [2024-07-15 18:26:42.423363] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:56.932 [2024-07-15 18:26:42.423367] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:56.932 [2024-07-15 18:26:42.423371] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:56.932 [2024-07-15 18:26:42.423375] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:56.932 [2024-07-15 18:26:42.423379] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:56.932 [2024-07-15 18:26:42.423383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.423389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.423399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:56.932 [2024-07-15 18:26:42.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.431357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.932 [2024-07-15 18:26:42.431365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.932 [2024-07-15 18:26:42.431372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.932 [2024-07-15 18:26:42.431379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.932 [2024-07-15 18:26:42.431383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.431390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.431400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.439342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.439349] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:56.932 [2024-07-15 18:26:42.439353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.439358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.439363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.439371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.447344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.447409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.447416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.447422] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:56.932 [2024-07-15 18:26:42.447426] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:56.932 [2024-07-15 18:26:42.447432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.455342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.455352] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:56.932 [2024-07-15 18:26:42.455360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.455366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.455373] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.932 [2024-07-15 18:26:42.455377] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.932 [2024-07-15 18:26:42.455382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.463342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.463355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.463362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.463369] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:56.932 [2024-07-15 18:26:42.463372] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:56.932 [2024-07-15 18:26:42.463379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.471343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.471353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:56.932 
[2024-07-15 18:26:42.471385] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:56.932 [2024-07-15 18:26:42.471389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:56.932 [2024-07-15 18:26:42.471393] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:56.932 [2024-07-15 18:26:42.471408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.479342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.479356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:56.932 [2024-07-15 18:26:42.487342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:56.932 [2024-07-15 18:26:42.487354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:57.190 [2024-07-15 18:26:42.495342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:57.190 [2024-07-15 18:26:42.495354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:57.190 [2024-07-15 18:26:42.503344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:57.190 [2024-07-15 18:26:42.503359] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:57.190 [2024-07-15 18:26:42.503364] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:57.190 [2024-07-15 18:26:42.503367] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:57.190 [2024-07-15 18:26:42.503370] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:57.190 [2024-07-15 18:26:42.503376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:57.190 [2024-07-15 18:26:42.503382] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:57.190 [2024-07-15 18:26:42.503386] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:57.190 [2024-07-15 18:26:42.503393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:57.190 [2024-07-15 18:26:42.503399] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:57.190 [2024-07-15 18:26:42.503403] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:57.190 [2024-07-15 18:26:42.503409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:57.191 [2024-07-15 18:26:42.503415] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:57.191 [2024-07-15 18:26:42.503419] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:57.191 [2024-07-15 18:26:42.503424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:57.191 [2024-07-15 18:26:42.511343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:57.191 [2024-07-15 18:26:42.511356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:57.191 [2024-07-15 18:26:42.511365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:57.191 [2024-07-15 18:26:42.511371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:57.191 ===================================================== 00:11:57.191 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:57.191 ===================================================== 00:11:57.191 Controller Capabilities/Features 00:11:57.191 ================================ 00:11:57.191 Vendor ID: 4e58 00:11:57.191 Subsystem Vendor ID: 4e58 00:11:57.191 Serial Number: SPDK2 00:11:57.191 Model Number: SPDK bdev Controller 00:11:57.191 Firmware Version: 24.09 00:11:57.191 Recommended Arb Burst: 6 00:11:57.191 IEEE OUI Identifier: 8d 6b 50 00:11:57.191 Multi-path I/O 00:11:57.191 May have multiple subsystem ports: Yes 00:11:57.191 May have multiple controllers: Yes 00:11:57.191 Associated with SR-IOV VF: No 00:11:57.191 Max Data Transfer Size: 131072 00:11:57.191 Max Number of Namespaces: 32 00:11:57.191 Max Number of I/O Queues: 127 00:11:57.191 NVMe Specification Version (VS): 1.3 00:11:57.191 NVMe Specification Version (Identify): 1.3 00:11:57.191 Maximum Queue Entries: 256 00:11:57.191 Contiguous Queues Required: Yes 00:11:57.191 Arbitration Mechanisms Supported 00:11:57.191 Weighted Round Robin: Not Supported 00:11:57.191 Vendor Specific: Not Supported 00:11:57.191 Reset Timeout: 15000 ms 00:11:57.191 Doorbell Stride: 4 bytes 00:11:57.191 NVM Subsystem Reset: Not Supported 00:11:57.191 Command Sets Supported 00:11:57.191 NVM Command Set: Supported 00:11:57.191 Boot Partition: Not Supported 00:11:57.191 Memory Page Size Minimum: 4096 bytes 00:11:57.191 Memory Page Size Maximum: 4096 bytes 00:11:57.191 Persistent Memory Region: Not Supported 00:11:57.191 Optional Asynchronous Events Supported 00:11:57.191 Namespace Attribute Notices: Supported 00:11:57.191 Firmware Activation Notices: Not Supported 00:11:57.191 ANA Change Notices: Not Supported 00:11:57.191 PLE Aggregate Log Change Notices: Not Supported 00:11:57.191 LBA Status Info Alert Notices: Not Supported 00:11:57.191 EGE Aggregate Log Change Notices: Not Supported 00:11:57.191 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.191 Zone Descriptor Change Notices: Not Supported 00:11:57.191 Discovery Log Change Notices: Not Supported 00:11:57.191 Controller Attributes 00:11:57.191 128-bit Host Identifier: Supported 00:11:57.191 Non-Operational Permissive Mode: Not Supported 00:11:57.191 NVM Sets: Not Supported 00:11:57.191 Read Recovery Levels: Not Supported 
00:11:57.191 Endurance Groups: Not Supported 00:11:57.191 Predictable Latency Mode: Not Supported 00:11:57.191 Traffic Based Keep ALive: Not Supported 00:11:57.191 Namespace Granularity: Not Supported 00:11:57.191 SQ Associations: Not Supported 00:11:57.191 UUID List: Not Supported 00:11:57.191 Multi-Domain Subsystem: Not Supported 00:11:57.191 Fixed Capacity Management: Not Supported 00:11:57.191 Variable Capacity Management: Not Supported 00:11:57.191 Delete Endurance Group: Not Supported 00:11:57.191 Delete NVM Set: Not Supported 00:11:57.191 Extended LBA Formats Supported: Not Supported 00:11:57.191 Flexible Data Placement Supported: Not Supported 00:11:57.191 00:11:57.191 Controller Memory Buffer Support 00:11:57.191 ================================ 00:11:57.191 Supported: No 00:11:57.191 00:11:57.191 Persistent Memory Region Support 00:11:57.191 ================================ 00:11:57.191 Supported: No 00:11:57.191 00:11:57.191 Admin Command Set Attributes 00:11:57.191 ============================ 00:11:57.191 Security Send/Receive: Not Supported 00:11:57.191 Format NVM: Not Supported 00:11:57.191 Firmware Activate/Download: Not Supported 00:11:57.191 Namespace Management: Not Supported 00:11:57.191 Device Self-Test: Not Supported 00:11:57.191 Directives: Not Supported 00:11:57.191 NVMe-MI: Not Supported 00:11:57.191 Virtualization Management: Not Supported 00:11:57.191 Doorbell Buffer Config: Not Supported 00:11:57.191 Get LBA Status Capability: Not Supported 00:11:57.191 Command & Feature Lockdown Capability: Not Supported 00:11:57.191 Abort Command Limit: 4 00:11:57.191 Async Event Request Limit: 4 00:11:57.191 Number of Firmware Slots: N/A 00:11:57.191 Firmware Slot 1 Read-Only: N/A 00:11:57.191 Firmware Activation Without Reset: N/A 00:11:57.191 Multiple Update Detection Support: N/A 00:11:57.191 Firmware Update Granularity: No Information Provided 00:11:57.191 Per-Namespace SMART Log: No 00:11:57.191 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.191 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:57.191 Command Effects Log Page: Supported 00:11:57.191 Get Log Page Extended Data: Supported 00:11:57.191 Telemetry Log Pages: Not Supported 00:11:57.191 Persistent Event Log Pages: Not Supported 00:11:57.191 Supported Log Pages Log Page: May Support 00:11:57.191 Commands Supported & Effects Log Page: Not Supported 00:11:57.191 Feature Identifiers & Effects Log Page:May Support 00:11:57.191 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.191 Data Area 4 for Telemetry Log: Not Supported 00:11:57.191 Error Log Page Entries Supported: 128 00:11:57.191 Keep Alive: Supported 00:11:57.191 Keep Alive Granularity: 10000 ms 00:11:57.191 00:11:57.191 NVM Command Set Attributes 00:11:57.191 ========================== 00:11:57.191 Submission Queue Entry Size 00:11:57.191 Max: 64 00:11:57.191 Min: 64 00:11:57.191 Completion Queue Entry Size 00:11:57.191 Max: 16 00:11:57.191 Min: 16 00:11:57.191 Number of Namespaces: 32 00:11:57.191 Compare Command: Supported 00:11:57.191 Write Uncorrectable Command: Not Supported 00:11:57.191 Dataset Management Command: Supported 00:11:57.191 Write Zeroes Command: Supported 00:11:57.191 Set Features Save Field: Not Supported 00:11:57.191 Reservations: Not Supported 00:11:57.191 Timestamp: Not Supported 00:11:57.191 Copy: Supported 00:11:57.191 Volatile Write Cache: Present 00:11:57.191 Atomic Write Unit (Normal): 1 00:11:57.191 Atomic Write Unit (PFail): 1 00:11:57.191 Atomic Compare & Write Unit: 1 00:11:57.191 Fused Compare & Write: 
Supported 00:11:57.191 Scatter-Gather List 00:11:57.191 SGL Command Set: Supported (Dword aligned) 00:11:57.191 SGL Keyed: Not Supported 00:11:57.191 SGL Bit Bucket Descriptor: Not Supported 00:11:57.191 SGL Metadata Pointer: Not Supported 00:11:57.191 Oversized SGL: Not Supported 00:11:57.191 SGL Metadata Address: Not Supported 00:11:57.191 SGL Offset: Not Supported 00:11:57.191 Transport SGL Data Block: Not Supported 00:11:57.191 Replay Protected Memory Block: Not Supported 00:11:57.191 00:11:57.191 Firmware Slot Information 00:11:57.191 ========================= 00:11:57.191 Active slot: 1 00:11:57.191 Slot 1 Firmware Revision: 24.09 00:11:57.191 00:11:57.191 00:11:57.191 Commands Supported and Effects 00:11:57.191 ============================== 00:11:57.191 Admin Commands 00:11:57.191 -------------- 00:11:57.191 Get Log Page (02h): Supported 00:11:57.191 Identify (06h): Supported 00:11:57.191 Abort (08h): Supported 00:11:57.191 Set Features (09h): Supported 00:11:57.191 Get Features (0Ah): Supported 00:11:57.191 Asynchronous Event Request (0Ch): Supported 00:11:57.191 Keep Alive (18h): Supported 00:11:57.191 I/O Commands 00:11:57.191 ------------ 00:11:57.191 Flush (00h): Supported LBA-Change 00:11:57.191 Write (01h): Supported LBA-Change 00:11:57.191 Read (02h): Supported 00:11:57.191 Compare (05h): Supported 00:11:57.191 Write Zeroes (08h): Supported LBA-Change 00:11:57.191 Dataset Management (09h): Supported LBA-Change 00:11:57.192 Copy (19h): Supported LBA-Change 00:11:57.192 00:11:57.192 Error Log 00:11:57.192 ========= 00:11:57.192 00:11:57.192 Arbitration 00:11:57.192 =========== 00:11:57.192 Arbitration Burst: 1 00:11:57.192 00:11:57.192 Power Management 00:11:57.192 ================ 00:11:57.192 Number of Power States: 1 00:11:57.192 Current Power State: Power State #0 00:11:57.192 Power State #0: 00:11:57.192 Max Power: 0.00 W 00:11:57.192 Non-Operational State: Operational 00:11:57.192 Entry Latency: Not Reported 00:11:57.192 Exit Latency: Not Reported 00:11:57.192 Relative Read Throughput: 0 00:11:57.192 Relative Read Latency: 0 00:11:57.192 Relative Write Throughput: 0 00:11:57.192 Relative Write Latency: 0 00:11:57.192 Idle Power: Not Reported 00:11:57.192 Active Power: Not Reported 00:11:57.192 Non-Operational Permissive Mode: Not Supported 00:11:57.192 00:11:57.192 Health Information 00:11:57.192 ================== 00:11:57.192 Critical Warnings: 00:11:57.192 Available Spare Space: OK 00:11:57.192 Temperature: OK 00:11:57.192 Device Reliability: OK 00:11:57.192 Read Only: No 00:11:57.192 Volatile Memory Backup: OK 00:11:57.192 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:57.192 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:57.192 Available Spare: 0% 00:11:57.192 Available Sp[2024-07-15 18:26:42.511456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:57.192 [2024-07-15 18:26:42.519342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:57.192 [2024-07-15 18:26:42.519372] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:57.192 [2024-07-15 18:26:42.519381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.192 [2024-07-15 18:26:42.519386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.192 [2024-07-15 18:26:42.519392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.192 [2024-07-15 18:26:42.519397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.192 [2024-07-15 18:26:42.519446] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:57.192 [2024-07-15 18:26:42.519456] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:57.192 [2024-07-15 18:26:42.520456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:57.192 [2024-07-15 18:26:42.520496] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:57.192 [2024-07-15 18:26:42.520502] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:57.192 [2024-07-15 18:26:42.521456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:57.192 [2024-07-15 18:26:42.521466] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:57.192 [2024-07-15 18:26:42.521511] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:57.192 [2024-07-15 18:26:42.524344] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:57.192 are Threshold: 0% 00:11:57.192 Life Percentage Used: 0% 00:11:57.192 Data Units Read: 0 00:11:57.192 Data Units Written: 0 00:11:57.192 Host Read Commands: 0 00:11:57.192 Host Write Commands: 0 00:11:57.192 Controller Busy Time: 0 minutes 00:11:57.192 Power Cycles: 0 00:11:57.192 Power On Hours: 0 hours 00:11:57.192 Unsafe Shutdowns: 0 00:11:57.192 Unrecoverable Media Errors: 0 00:11:57.192 Lifetime Error Log Entries: 0 00:11:57.192 Warning Temperature Time: 0 minutes 00:11:57.192 Critical Temperature Time: 0 minutes 00:11:57.192 00:11:57.192 Number of Queues 00:11:57.192 ================ 00:11:57.192 Number of I/O Submission Queues: 127 00:11:57.192 Number of I/O Completion Queues: 127 00:11:57.192 00:11:57.192 Active Namespaces 00:11:57.192 ================= 00:11:57.192 Namespace ID:1 00:11:57.192 Error Recovery Timeout: Unlimited 00:11:57.192 Command Set Identifier: NVM (00h) 00:11:57.192 Deallocate: Supported 00:11:57.192 Deallocated/Unwritten Error: Not Supported 00:11:57.192 Deallocated Read Value: Unknown 00:11:57.192 Deallocate in Write Zeroes: Not Supported 00:11:57.192 Deallocated Guard Field: 0xFFFF 00:11:57.192 Flush: Supported 00:11:57.192 Reservation: Supported 00:11:57.192 Namespace Sharing Capabilities: Multiple Controllers 00:11:57.192 Size (in LBAs): 131072 (0GiB) 00:11:57.192 Capacity (in LBAs): 131072 (0GiB) 00:11:57.192 Utilization (in LBAs): 131072 (0GiB) 00:11:57.192 NGUID: D9494AC534A242BAAAEF97D7AB4F459A 00:11:57.192 UUID: d9494ac5-34a2-42ba-aaef-97d7ab4f459a 00:11:57.192 Thin Provisioning: Not Supported 00:11:57.192 Per-NS Atomic Units: Yes 00:11:57.192 Atomic Boundary Size (Normal): 0 00:11:57.192 Atomic Boundary Size 
(PFail): 0 00:11:57.192 Atomic Boundary Offset: 0 00:11:57.192 Maximum Single Source Range Length: 65535 00:11:57.192 Maximum Copy Length: 65535 00:11:57.192 Maximum Source Range Count: 1 00:11:57.192 NGUID/EUI64 Never Reused: No 00:11:57.192 Namespace Write Protected: No 00:11:57.192 Number of LBA Formats: 1 00:11:57.192 Current LBA Format: LBA Format #00 00:11:57.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.192 00:11:57.192 18:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:57.192 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.192 [2024-07-15 18:26:42.744560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:02.456 Initializing NVMe Controllers 00:12:02.456 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:02.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:02.456 Initialization complete. Launching workers. 00:12:02.456 ======================================================== 00:12:02.456 Latency(us) 00:12:02.456 Device Information : IOPS MiB/s Average min max 00:12:02.456 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39928.05 155.97 3205.37 934.66 10652.84 00:12:02.456 ======================================================== 00:12:02.456 Total : 39928.05 155.97 3205.37 934.66 10652.84 00:12:02.456 00:12:02.456 [2024-07-15 18:26:47.851604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:02.456 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:02.456 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.714 [2024-07-15 18:26:48.070241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.979 Initializing NVMe Controllers 00:12:07.979 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:07.979 Initialization complete. Launching workers. 
00:12:07.979 ======================================================== 00:12:07.979 Latency(us) 00:12:07.979 Device Information : IOPS MiB/s Average min max 00:12:07.979 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39896.92 155.85 3208.09 932.75 6688.85 00:12:07.979 ======================================================== 00:12:07.979 Total : 39896.92 155.85 3208.09 932.75 6688.85 00:12:07.979 00:12:07.979 [2024-07-15 18:26:53.091628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.979 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:07.979 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.979 [2024-07-15 18:26:53.274702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:13.298 [2024-07-15 18:26:58.407433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:13.298 Initializing NVMe Controllers 00:12:13.298 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:13.298 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:13.298 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:13.298 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:13.298 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:13.298 Initialization complete. Launching workers. 00:12:13.298 Starting thread on core 2 00:12:13.298 Starting thread on core 3 00:12:13.298 Starting thread on core 1 00:12:13.298 18:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:13.298 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.298 [2024-07-15 18:26:58.685800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:16.581 [2024-07-15 18:27:01.739178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:16.581 Initializing NVMe Controllers 00:12:16.581 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.581 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:16.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:16.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:16.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:16.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:16.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:16.581 Initialization complete. Launching workers. 
00:12:16.581 Starting thread on core 1 with urgent priority queue 00:12:16.581 Starting thread on core 2 with urgent priority queue 00:12:16.581 Starting thread on core 3 with urgent priority queue 00:12:16.581 Starting thread on core 0 with urgent priority queue 00:12:16.581 SPDK bdev Controller (SPDK2 ) core 0: 7805.67 IO/s 12.81 secs/100000 ios 00:12:16.581 SPDK bdev Controller (SPDK2 ) core 1: 6073.67 IO/s 16.46 secs/100000 ios 00:12:16.581 SPDK bdev Controller (SPDK2 ) core 2: 7178.67 IO/s 13.93 secs/100000 ios 00:12:16.581 SPDK bdev Controller (SPDK2 ) core 3: 6687.67 IO/s 14.95 secs/100000 ios 00:12:16.581 ======================================================== 00:12:16.581 00:12:16.581 18:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:16.581 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.581 [2024-07-15 18:27:01.995800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:16.581 Initializing NVMe Controllers 00:12:16.581 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.581 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.581 Namespace ID: 1 size: 0GB 00:12:16.581 Initialization complete. 00:12:16.581 INFO: using host memory buffer for IO 00:12:16.581 Hello world! 00:12:16.581 [2024-07-15 18:27:02.007876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:16.581 18:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:16.581 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.838 [2024-07-15 18:27:02.272713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.214 Initializing NVMe Controllers 00:12:18.214 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.214 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.214 Initialization complete. Launching workers. 
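The overhead tool invoked above drives queue-depth-1 I/O at the vfio-user controller and, with -H, emits the per-operation submit/complete latency histograms that follow. A minimal re-run sketch, assuming the workspace layout and target socket shown in this log (both are this run's paths, not fixed defaults):

# sketch: re-run the single-depth overhead measurement against the vfio-user controller
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

In the submit histogram below, note the small tail bucket near 4 ms (10 samples): that is what pulls the reported max so far above the ~5.8 us average.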
00:12:18.214 submit (in ns) avg, min, max = 5839.5, 3183.8, 4005404.8 00:12:18.214 complete (in ns) avg, min, max = 20615.3, 1815.2, 6303418.1 00:12:18.214 00:12:18.214 Submit histogram 00:12:18.214 ================ 00:12:18.214 Range in us Cumulative Count 00:12:18.214 3.170 - 3.185: 0.0060% ( 1) 00:12:18.214 3.185 - 3.200: 0.0363% ( 5) 00:12:18.214 3.200 - 3.215: 0.2359% ( 33) 00:12:18.214 3.215 - 3.230: 0.6653% ( 71) 00:12:18.214 3.230 - 3.246: 1.2218% ( 92) 00:12:18.214 3.246 - 3.261: 2.3892% ( 193) 00:12:18.214 3.261 - 3.276: 5.9154% ( 583) 00:12:18.214 3.276 - 3.291: 11.3470% ( 898) 00:12:18.214 3.291 - 3.307: 17.0870% ( 949) 00:12:18.214 3.307 - 3.322: 23.2747% ( 1023) 00:12:18.214 3.322 - 3.337: 29.8252% ( 1083) 00:12:18.214 3.337 - 3.352: 35.2205% ( 892) 00:12:18.214 3.352 - 3.368: 40.9605% ( 949) 00:12:18.214 3.368 - 3.383: 46.7973% ( 965) 00:12:18.214 3.383 - 3.398: 52.5494% ( 951) 00:12:18.214 3.398 - 3.413: 57.2673% ( 780) 00:12:18.214 3.413 - 3.429: 64.1505% ( 1138) 00:12:18.215 3.429 - 3.444: 70.9732% ( 1128) 00:12:18.215 3.444 - 3.459: 75.9814% ( 828) 00:12:18.215 3.459 - 3.474: 80.6992% ( 780) 00:12:18.215 3.474 - 3.490: 83.5964% ( 479) 00:12:18.215 3.490 - 3.505: 85.8344% ( 370) 00:12:18.215 3.505 - 3.520: 86.8384% ( 166) 00:12:18.215 3.520 - 3.535: 87.4614% ( 103) 00:12:18.215 3.535 - 3.550: 87.9453% ( 80) 00:12:18.215 3.550 - 3.566: 88.3929% ( 74) 00:12:18.215 3.566 - 3.581: 89.1187% ( 120) 00:12:18.215 3.581 - 3.596: 89.9897% ( 144) 00:12:18.215 3.596 - 3.611: 90.9696% ( 162) 00:12:18.215 3.611 - 3.627: 91.7982% ( 137) 00:12:18.215 3.627 - 3.642: 92.6813% ( 146) 00:12:18.215 3.642 - 3.657: 93.6309% ( 157) 00:12:18.215 3.657 - 3.672: 94.6047% ( 161) 00:12:18.215 3.672 - 3.688: 95.4455% ( 139) 00:12:18.215 3.688 - 3.703: 96.4011% ( 158) 00:12:18.215 3.703 - 3.718: 97.2419% ( 139) 00:12:18.215 3.718 - 3.733: 97.8951% ( 108) 00:12:18.215 3.733 - 3.749: 98.2822% ( 64) 00:12:18.215 3.749 - 3.764: 98.6512% ( 61) 00:12:18.215 3.764 - 3.779: 99.0443% ( 65) 00:12:18.215 3.779 - 3.794: 99.2137% ( 28) 00:12:18.215 3.794 - 3.810: 99.3468% ( 22) 00:12:18.215 3.810 - 3.825: 99.4617% ( 19) 00:12:18.215 3.825 - 3.840: 99.5222% ( 10) 00:12:18.215 3.840 - 3.855: 99.5282% ( 1) 00:12:18.215 3.855 - 3.870: 99.5645% ( 6) 00:12:18.215 3.870 - 3.886: 99.5766% ( 2) 00:12:18.215 3.886 - 3.901: 99.5887% ( 2) 00:12:18.215 3.901 - 3.931: 99.5947% ( 1) 00:12:18.215 3.931 - 3.962: 99.6008% ( 1) 00:12:18.215 4.023 - 4.053: 99.6068% ( 1) 00:12:18.215 5.242 - 5.272: 99.6129% ( 1) 00:12:18.215 5.333 - 5.364: 99.6189% ( 1) 00:12:18.215 5.364 - 5.394: 99.6310% ( 2) 00:12:18.215 5.425 - 5.455: 99.6371% ( 1) 00:12:18.215 5.638 - 5.669: 99.6431% ( 1) 00:12:18.215 5.669 - 5.699: 99.6492% ( 1) 00:12:18.215 5.730 - 5.760: 99.6613% ( 2) 00:12:18.215 5.821 - 5.851: 99.6673% ( 1) 00:12:18.215 5.851 - 5.882: 99.6734% ( 1) 00:12:18.215 5.943 - 5.973: 99.6794% ( 1) 00:12:18.215 6.004 - 6.034: 99.6855% ( 1) 00:12:18.215 6.248 - 6.278: 99.6915% ( 1) 00:12:18.215 6.278 - 6.309: 99.6976% ( 1) 00:12:18.215 6.339 - 6.370: 99.7157% ( 3) 00:12:18.215 6.370 - 6.400: 99.7218% ( 1) 00:12:18.215 6.400 - 6.430: 99.7278% ( 1) 00:12:18.215 6.491 - 6.522: 99.7339% ( 1) 00:12:18.215 6.644 - 6.674: 99.7399% ( 1) 00:12:18.215 6.949 - 6.979: 99.7460% ( 1) 00:12:18.215 6.979 - 7.010: 99.7520% ( 1) 00:12:18.215 7.040 - 7.070: 99.7641% ( 2) 00:12:18.215 7.101 - 7.131: 99.7702% ( 1) 00:12:18.215 7.131 - 7.162: 99.7762% ( 1) 00:12:18.215 7.162 - 7.192: 99.7883% ( 2) 00:12:18.215 7.375 - 7.406: 99.7944% ( 1) 00:12:18.215 7.436 - 7.467: 
99.8004% ( 1) 00:12:18.215 7.558 - 7.589: 99.8064% ( 1) 00:12:18.215 7.589 - 7.619: 99.8125% ( 1) 00:12:18.215 [2024-07-15 18:27:03.369345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.215 7.710 - 7.741: 99.8185% ( 1) 00:12:18.215 7.802 - 7.863: 99.8246% ( 1) 00:12:18.215 7.924 - 7.985: 99.8306% ( 1) 00:12:18.215 8.046 - 8.107: 99.8367% ( 1) 00:12:18.215 8.168 - 8.229: 99.8548% ( 3) 00:12:18.215 8.290 - 8.350: 99.8609% ( 1) 00:12:18.215 8.350 - 8.411: 99.8851% ( 4) 00:12:18.215 8.411 - 8.472: 99.8911% ( 1) 00:12:18.215 8.594 - 8.655: 99.9032% ( 2) 00:12:18.215 8.838 - 8.899: 99.9093% ( 1) 00:12:18.215 9.021 - 9.082: 99.9153% ( 1) 00:12:18.215 9.143 - 9.204: 99.9214% ( 1) 00:12:18.215 9.204 - 9.265: 99.9274% ( 1) 00:12:18.215 11.459 - 11.520: 99.9335% ( 1) 00:12:18.215 19.139 - 19.261: 99.9395% ( 1) 00:12:18.215 3994.575 - 4025.783: 100.0000% ( 10) 00:12:18.215 00:12:18.215 Complete histogram 00:12:18.215 ================== 00:12:18.215 Range in us Cumulative Count 00:12:18.215 1.813 - 1.821: 0.0847% ( 14) 00:12:18.215 1.821 - 1.829: 2.5646% ( 410) 00:12:18.215 1.829 - 1.836: 18.3935% ( 2617) 00:12:18.215 1.836 - 1.844: 51.7087% ( 5508) 00:12:18.215 1.844 - 1.851: 77.4754% ( 4260) 00:12:18.215 1.851 - 1.859: 87.4312% ( 1646) 00:12:18.215 1.859 - 1.867: 91.0724% ( 602) 00:12:18.215 1.867 - 1.874: 92.8386% ( 292) 00:12:18.215 1.874 - 1.882: 93.9031% ( 176) 00:12:18.215 1.882 - 1.890: 94.3809% ( 79) 00:12:18.215 1.890 - 1.897: 94.7680% ( 64) 00:12:18.215 1.897 - 1.905: 95.3850% ( 102) 00:12:18.215 1.905 - 1.912: 96.2439% ( 142) 00:12:18.215 1.912 - 1.920: 97.0967% ( 141) 00:12:18.215 1.920 - 1.928: 97.9798% ( 146) 00:12:18.215 1.928 - 1.935: 98.4879% ( 84) 00:12:18.215 1.935 - 1.943: 98.7056% ( 36) 00:12:18.215 1.943 - 1.950: 98.8205% ( 19) 00:12:18.215 1.950 - 1.966: 99.0322% ( 35) 00:12:18.215 1.966 - 1.981: 99.0746% ( 7) 00:12:18.215 1.981 - 1.996: 99.1411% ( 11) 00:12:18.215 1.996 - 2.011: 99.1593% ( 3) 00:12:18.215 2.011 - 2.027: 99.1895% ( 5) 00:12:18.215 2.027 - 2.042: 99.2137% ( 4) 00:12:18.215 2.042 - 2.057: 99.2439% ( 5) 00:12:18.215 2.072 - 2.088: 99.2863% ( 7) 00:12:18.215 2.088 - 2.103: 99.2984% ( 2) 00:12:18.215 2.270 - 2.286: 99.3044% ( 1) 00:12:18.215 3.657 - 3.672: 99.3105% ( 1) 00:12:18.215 3.672 - 3.688: 99.3165% ( 1) 00:12:18.215 3.779 - 3.794: 99.3226% ( 1) 00:12:18.215 3.931 - 3.962: 99.3286% ( 1) 00:12:18.215 3.962 - 3.992: 99.3347% ( 1) 00:12:18.215 3.992 - 4.023: 99.3407% ( 1) 00:12:18.215 4.023 - 4.053: 99.3468% ( 1) 00:12:18.215 4.450 - 4.480: 99.3528% ( 1) 00:12:18.215 4.480 - 4.510: 99.3589% ( 1) 00:12:18.215 4.541 - 4.571: 99.3649% ( 1) 00:12:18.215 4.571 - 4.602: 99.3710% ( 1) 00:12:18.215 4.602 - 4.632: 99.3770% ( 1) 00:12:18.215 4.663 - 4.693: 99.3891% ( 2) 00:12:18.215 4.754 - 4.785: 99.3951% ( 1) 00:12:18.215 5.029 - 5.059: 99.4012% ( 1) 00:12:18.215 5.059 - 5.090: 99.4072% ( 1) 00:12:18.215 5.364 - 5.394: 99.4133% ( 1) 00:12:18.215 5.394 - 5.425: 99.4193% ( 1) 00:12:18.215 5.486 - 5.516: 99.4314% ( 2) 00:12:18.215 5.547 - 5.577: 99.4375% ( 1) 00:12:18.215 5.638 - 5.669: 99.4435% ( 1) 00:12:18.215 5.699 - 5.730: 99.4496% ( 1) 00:12:18.215 5.821 - 5.851: 99.4556% ( 1) 00:12:18.215 5.912 - 5.943: 99.4617% ( 1) 00:12:18.215 6.248 - 6.278: 99.4798% ( 3) 00:12:18.215 6.278 - 6.309: 99.4859% ( 1) 00:12:18.215 6.491 - 6.522: 99.4919% ( 1) 00:12:18.215 6.766 - 6.796: 99.5040% ( 2) 00:12:18.215 6.857 - 6.888: 99.5101% ( 1) 00:12:18.215 6.918 - 6.949: 99.5161% ( 1) 00:12:18.215 6.979 - 7.010: 99.5222% ( 
1) 00:12:18.215 7.650 - 7.680: 99.5282% ( 1) 00:12:18.215 10.057 - 10.118: 99.5343% ( 1) 00:12:18.215 1224.899 - 1232.701: 99.5403% ( 1) 00:12:18.215 3994.575 - 4025.783: 99.9819% ( 73) 00:12:18.215 4993.219 - 5024.427: 99.9879% ( 1) 00:12:18.215 5991.863 - 6023.070: 99.9940% ( 1) 00:12:18.215 6272.731 - 6303.939: 100.0000% ( 1) 00:12:18.215 00:12:18.215 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:18.215 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:18.215 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:18.215 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:18.215 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:18.215 [ 00:12:18.215 { 00:12:18.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:18.215 "subtype": "Discovery", 00:12:18.215 "listen_addresses": [], 00:12:18.215 "allow_any_host": true, 00:12:18.215 "hosts": [] 00:12:18.215 }, 00:12:18.215 { 00:12:18.215 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:18.215 "subtype": "NVMe", 00:12:18.215 "listen_addresses": [ 00:12:18.215 { 00:12:18.215 "trtype": "VFIOUSER", 00:12:18.215 "adrfam": "IPv4", 00:12:18.215 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:18.215 "trsvcid": "0" 00:12:18.215 } 00:12:18.215 ], 00:12:18.215 "allow_any_host": true, 00:12:18.215 "hosts": [], 00:12:18.215 "serial_number": "SPDK1", 00:12:18.215 "model_number": "SPDK bdev Controller", 00:12:18.215 "max_namespaces": 32, 00:12:18.215 "min_cntlid": 1, 00:12:18.215 "max_cntlid": 65519, 00:12:18.215 "namespaces": [ 00:12:18.215 { 00:12:18.215 "nsid": 1, 00:12:18.215 "bdev_name": "Malloc1", 00:12:18.215 "name": "Malloc1", 00:12:18.215 "nguid": "E2DD58CEBA894C649F65B3EDAEFBCEC6", 00:12:18.215 "uuid": "e2dd58ce-ba89-4c64-9f65-b3edaefbcec6" 00:12:18.215 }, 00:12:18.215 { 00:12:18.215 "nsid": 2, 00:12:18.215 "bdev_name": "Malloc3", 00:12:18.215 "name": "Malloc3", 00:12:18.215 "nguid": "FF57EAE599894C09A1251B89A5F80030", 00:12:18.215 "uuid": "ff57eae5-9989-4c09-a125-1b89a5f80030" 00:12:18.215 } 00:12:18.215 ] 00:12:18.215 }, 00:12:18.215 { 00:12:18.215 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:18.215 "subtype": "NVMe", 00:12:18.215 "listen_addresses": [ 00:12:18.215 { 00:12:18.215 "trtype": "VFIOUSER", 00:12:18.215 "adrfam": "IPv4", 00:12:18.215 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:18.215 "trsvcid": "0" 00:12:18.215 } 00:12:18.216 ], 00:12:18.216 "allow_any_host": true, 00:12:18.216 "hosts": [], 00:12:18.216 "serial_number": "SPDK2", 00:12:18.216 "model_number": "SPDK bdev Controller", 00:12:18.216 "max_namespaces": 32, 00:12:18.216 "min_cntlid": 1, 00:12:18.216 "max_cntlid": 65519, 00:12:18.216 "namespaces": [ 00:12:18.216 { 00:12:18.216 "nsid": 1, 00:12:18.216 "bdev_name": "Malloc2", 00:12:18.216 "name": "Malloc2", 00:12:18.216 "nguid": "D9494AC534A242BAAAEF97D7AB4F459A", 00:12:18.216 "uuid": "d9494ac5-34a2-42ba-aaef-97d7ab4f459a" 00:12:18.216 } 00:12:18.216 ] 00:12:18.216 } 00:12:18.216 ] 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3845424 00:12:18.216 
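The block that follows exercises the asynchronous-event path: the aer tool attaches to the controller, arms an Asynchronous Event Request, and touches /tmp/aer_touch_file once it is listening; the script then hot-adds a second namespace, which fires a namespace-attribute-changed notice. A condensed sketch of the same sequence, using the tools and target from this run (the polling loop is an illustrative stand-in for the script's waitforfile helper):

# sketch: arm AER, then hot-add a namespace to trigger the notice
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -n 2 -g -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # stand-in for waitforfile
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2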
18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:18.216 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:18.216 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.216 [2024-07-15 18:27:03.730542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.474 Malloc4 00:12:18.474 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:18.474 [2024-07-15 18:27:03.975295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.474 18:27:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:18.474 Asynchronous Event Request test 00:12:18.474 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.474 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.474 Registering asynchronous event callbacks... 00:12:18.474 Starting namespace attribute notice tests for all controllers... 00:12:18.474 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:18.474 aer_cb - Changed Namespace 00:12:18.474 Cleaning up... 
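The nvmf_get_subsystems listing below confirms the trigger: compared with the earlier dump, cnode2 now carries Malloc4 as nsid 2. For spot checks, a filter along these lines works if jq is available on the host (an assumption; it is not part of this test):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces[] | "\(.nsid) \(.bdev_name)"'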
00:12:18.733 [ 00:12:18.733 { 00:12:18.733 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:18.733 "subtype": "Discovery", 00:12:18.733 "listen_addresses": [], 00:12:18.733 "allow_any_host": true, 00:12:18.733 "hosts": [] 00:12:18.733 }, 00:12:18.733 { 00:12:18.733 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:18.733 "subtype": "NVMe", 00:12:18.733 "listen_addresses": [ 00:12:18.733 { 00:12:18.733 "trtype": "VFIOUSER", 00:12:18.733 "adrfam": "IPv4", 00:12:18.733 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:18.733 "trsvcid": "0" 00:12:18.733 } 00:12:18.733 ], 00:12:18.733 "allow_any_host": true, 00:12:18.733 "hosts": [], 00:12:18.733 "serial_number": "SPDK1", 00:12:18.733 "model_number": "SPDK bdev Controller", 00:12:18.733 "max_namespaces": 32, 00:12:18.733 "min_cntlid": 1, 00:12:18.733 "max_cntlid": 65519, 00:12:18.733 "namespaces": [ 00:12:18.733 { 00:12:18.733 "nsid": 1, 00:12:18.733 "bdev_name": "Malloc1", 00:12:18.733 "name": "Malloc1", 00:12:18.733 "nguid": "E2DD58CEBA894C649F65B3EDAEFBCEC6", 00:12:18.733 "uuid": "e2dd58ce-ba89-4c64-9f65-b3edaefbcec6" 00:12:18.733 }, 00:12:18.733 { 00:12:18.733 "nsid": 2, 00:12:18.733 "bdev_name": "Malloc3", 00:12:18.733 "name": "Malloc3", 00:12:18.733 "nguid": "FF57EAE599894C09A1251B89A5F80030", 00:12:18.733 "uuid": "ff57eae5-9989-4c09-a125-1b89a5f80030" 00:12:18.733 } 00:12:18.733 ] 00:12:18.733 }, 00:12:18.733 { 00:12:18.733 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:18.733 "subtype": "NVMe", 00:12:18.733 "listen_addresses": [ 00:12:18.733 { 00:12:18.733 "trtype": "VFIOUSER", 00:12:18.733 "adrfam": "IPv4", 00:12:18.733 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:18.733 "trsvcid": "0" 00:12:18.733 } 00:12:18.733 ], 00:12:18.733 "allow_any_host": true, 00:12:18.733 "hosts": [], 00:12:18.733 "serial_number": "SPDK2", 00:12:18.733 "model_number": "SPDK bdev Controller", 00:12:18.733 "max_namespaces": 32, 00:12:18.733 "min_cntlid": 1, 00:12:18.733 "max_cntlid": 65519, 00:12:18.733 "namespaces": [ 00:12:18.733 { 00:12:18.733 "nsid": 1, 00:12:18.733 "bdev_name": "Malloc2", 00:12:18.733 "name": "Malloc2", 00:12:18.733 "nguid": "D9494AC534A242BAAAEF97D7AB4F459A", 00:12:18.733 "uuid": "d9494ac5-34a2-42ba-aaef-97d7ab4f459a" 00:12:18.733 }, 00:12:18.733 { 00:12:18.733 "nsid": 2, 00:12:18.733 "bdev_name": "Malloc4", 00:12:18.733 "name": "Malloc4", 00:12:18.733 "nguid": "D30245D830004192949C476F9D3EB4A9", 00:12:18.733 "uuid": "d30245d8-3000-4192-949c-476f9d3eb4a9" 00:12:18.733 } 00:12:18.733 ] 00:12:18.733 } 00:12:18.733 ] 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3845424 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3837626 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3837626 ']' 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3837626 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3837626 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3837626' 00:12:18.733 killing process with pid 3837626 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3837626 00:12:18.733 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3837626 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:18.991 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3845609 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3845609' 00:12:18.992 Process pid: 3845609 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3845609 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3845609 ']' 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.992 18:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:18.992 [2024-07-15 18:27:04.529819] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:18.992 [2024-07-15 18:27:04.530631] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:12:18.992 [2024-07-15 18:27:04.530667] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.250 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.250 [2024-07-15 18:27:04.593699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.250 [2024-07-15 18:27:04.673009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.250 [2024-07-15 18:27:04.673047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
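At this point the first target has been killed and nvmf_tgt is relaunched in interrupt mode; the per-thread "to intr mode" notices further down confirm the reactors and poll groups switched over. A minimal bring-up sketch with the same flags as this run (the sleep is an illustrative stand-in for the script's waitforlisten helper; '-M -I' are the transport_args the test passes through to nvmf_create_transport):

# sketch: interrupt-mode target plus VFIOUSER transport, as traced in this run
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
sleep 1    # stand-in for waitforlisten
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I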
00:12:19.250 [2024-07-15 18:27:04.673054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.250 [2024-07-15 18:27:04.673059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.250 [2024-07-15 18:27:04.673064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.250 [2024-07-15 18:27:04.673111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.250 [2024-07-15 18:27:04.673219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.250 [2024-07-15 18:27:04.673325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.250 [2024-07-15 18:27:04.673326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.250 [2024-07-15 18:27:04.762299] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:19.250 [2024-07-15 18:27:04.762392] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:19.250 [2024-07-15 18:27:04.764094] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:19.250 [2024-07-15 18:27:04.764161] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:19.250 [2024-07-15 18:27:04.764169] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:19.817 18:27:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.817 18:27:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:19.817 18:27:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:21.192 Malloc1 00:12:21.192 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:21.450 18:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:21.708 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:21.967 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:21.967 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:21.967 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:21.967 Malloc2 00:12:21.967 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:22.226 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:22.484 18:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3845609 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3845609 ']' 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3845609 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3845609 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3845609' 00:12:22.743 killing process with pid 3845609 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3845609 00:12:22.743 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3845609 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:23.003 00:12:23.003 real 0m51.246s 00:12:23.003 user 3m22.720s 00:12:23.003 sys 0m3.533s 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:23.003 ************************************ 00:12:23.003 END TEST nvmf_vfio_user 00:12:23.003 ************************************ 00:12:23.003 18:27:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:23.003 18:27:08 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:23.003 18:27:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.003 18:27:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.003 18:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.003 ************************************ 00:12:23.003 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:23.003 ************************************ 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:23.003 * Looking for test storage... 00:12:23.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.003 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3846500 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3846500' 00:12:23.004 Process pid: 3846500 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3846500 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3846500 ']' 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.004 18:27:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:23.004 [2024-07-15 18:27:08.548984] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:12:23.004 [2024-07-15 18:27:08.549031] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.263 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.263 [2024-07-15 18:27:08.617249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.263 [2024-07-15 18:27:08.691966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.263 [2024-07-15 18:27:08.692007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.263 [2024-07-15 18:27:08.692013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.263 [2024-07-15 18:27:08.692019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.263 [2024-07-15 18:27:08.692024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.263 [2024-07-15 18:27:08.692094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.263 [2024-07-15 18:27:08.693353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.263 [2024-07-15 18:27:08.693356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.829 18:27:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.829 18:27:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:23.829 18:27:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 malloc0 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.207 
18:27:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:25.207 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.207 00:12:25.207 00:12:25.207 CUnit - A unit testing framework for C - Version 2.1-3 00:12:25.207 http://cunit.sourceforge.net/ 00:12:25.207 00:12:25.207 00:12:25.207 Suite: nvme_compliance 00:12:25.207 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 18:27:10.594830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.207 [2024-07-15 18:27:10.596145] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:25.207 [2024-07-15 18:27:10.596160] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:25.208 [2024-07-15 18:27:10.596166] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:25.208 [2024-07-15 18:27:10.597848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.208 passed 00:12:25.208 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 18:27:10.677380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.208 [2024-07-15 18:27:10.680408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.208 passed 00:12:25.208 Test: admin_identify_ns ...[2024-07-15 18:27:10.758197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.466 [2024-07-15 18:27:10.818352] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:25.466 [2024-07-15 18:27:10.826353] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:25.466 [2024-07-15 18:27:10.847428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.466 passed 00:12:25.466 Test: admin_get_features_mandatory_features ...[2024-07-15 18:27:10.921144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.466 [2024-07-15 18:27:10.924166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.466 passed 00:12:25.466 Test: admin_get_features_optional_features ...[2024-07-15 18:27:10.999626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.466 [2024-07-15 18:27:11.002645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.724 passed 00:12:25.724 Test: admin_set_features_number_of_queues ...[2024-07-15 18:27:11.078511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.724 [2024-07-15 18:27:11.187433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.724 passed 00:12:25.724 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 18:27:11.261172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.725 [2024-07-15 18:27:11.264190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.983 passed 00:12:25.983 Test: admin_get_log_page_with_lpo ...[2024-07-15 18:27:11.342521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.983 [2024-07-15 18:27:11.411350] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:25.983 [2024-07-15 18:27:11.424394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.983 passed 00:12:25.983 Test: fabric_property_get ...[2024-07-15 18:27:11.498076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:25.983 [2024-07-15 18:27:11.499300] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:25.983 [2024-07-15 18:27:11.501099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:25.983 passed 00:12:26.242 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 18:27:11.577601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.242 [2024-07-15 18:27:11.578828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:26.242 [2024-07-15 18:27:11.580624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.242 passed 00:12:26.242 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 18:27:11.656514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.242 [2024-07-15 18:27:11.739001] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:26.242 [2024-07-15 18:27:11.751342] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:26.242 [2024-07-15 18:27:11.756435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.242 passed 00:12:26.500 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 18:27:11.829207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.500 [2024-07-15 18:27:11.830441] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:26.500 [2024-07-15 18:27:11.832235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.500 passed 00:12:26.500 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 18:27:11.908973] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.500 [2024-07-15 18:27:11.985347] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:26.500 [2024-07-15 18:27:12.009344] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:26.500 [2024-07-15 18:27:12.014429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.500 passed 00:12:26.765 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 18:27:12.087992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.765 [2024-07-15 18:27:12.089213] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:26.765 [2024-07-15 18:27:12.089235] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:26.765 [2024-07-15 18:27:12.091010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.765 passed 00:12:26.765 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 18:27:12.167646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.765 [2024-07-15 18:27:12.260356] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:26.766 [2024-07-15 18:27:12.268347] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:26.766 [2024-07-15 18:27:12.276345] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:26.766 [2024-07-15 18:27:12.284346] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:26.766 [2024-07-15 18:27:12.313426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.027 passed 00:12:27.027 Test: admin_create_io_sq_verify_pc ...[2024-07-15 18:27:12.387151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.027 [2024-07-15 18:27:12.402353] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:27.027 [2024-07-15 18:27:12.420315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.027 passed 00:12:27.027 Test: admin_create_io_qp_max_qps ...[2024-07-15 18:27:12.495806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.403 [2024-07-15 18:27:13.592348] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:28.662 [2024-07-15 18:27:13.984591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.662 passed 00:12:28.662 Test: admin_create_io_sq_shared_cq ...[2024-07-15 18:27:14.060504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.662 [2024-07-15 18:27:14.196346] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:28.921 [2024-07-15 18:27:14.233406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.921 passed 00:12:28.921 00:12:28.921 Run Summary: Type Total Ran Passed Failed Inactive 00:12:28.921 suites 1 1 n/a 0 0 00:12:28.921 tests 18 18 18 0 0 00:12:28.921 asserts 360 360 360 0 n/a 00:12:28.921 00:12:28.921 Elapsed time = 1.493 seconds 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3846500 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3846500 ']' 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3846500 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3846500 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3846500' 00:12:28.921 killing process with pid 3846500 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3846500 00:12:28.921 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3846500 00:12:29.180 18:27:14 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:29.180 00:12:29.180 real 0m6.140s 00:12:29.180 user 0m17.560s 00:12:29.180 sys 0m0.475s 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:29.180 ************************************ 00:12:29.180 END TEST nvmf_vfio_user_nvme_compliance 00:12:29.180 ************************************ 00:12:29.180 18:27:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.180 18:27:14 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:29.180 18:27:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.180 18:27:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.180 18:27:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.180 ************************************ 00:12:29.180 START TEST nvmf_vfio_user_fuzz 00:12:29.180 ************************************ 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:29.180 * Looking for test storage... 00:12:29.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
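Stripped of the xtrace noise, the compliance run that just finished (END TEST above) stood its target up with a short RPC sequence before pointing the nvme_compliance binary at the vfio-user socket. A condensed sketch of that sequence, with repo-relative paths, and assuming the harness's rpc_cmd maps to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock:

    # vfio-user transport, a 64 MiB / 512 B-block ramdisk, one subsystem at /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # then drive the CUnit suite against it
    test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each of the 18 cases in the run above brackets a single admin-path assertion between the "enabling controller" and "disabling controller" notices.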
00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.180 18:27:14 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:29.180 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3847773 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3847773' 00:12:29.181 Process pid: 3847773 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3847773 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3847773 ']' 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
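The fuzz target is brought up with the harness's usual start-and-wait pattern: background nvmf_tgt pinned to core 0 (-m 0x1), record the pid, install a cleanup trap, then block until the JSON-RPC socket answers. A minimal sketch of what that amounts to; the rpc_get_methods probe is an assumption about waitforlisten's internals, and killprocess is the harness helper that appears later in the log:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # poll until the app serves JSON-RPC on the default /var/tmp/spdk.sock
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done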
00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.181 18:27:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:30.115 18:27:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.115 18:27:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:30.115 18:27:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.050 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 malloc0 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:31.308 18:27:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:03.454 Fuzzing completed. 
Shutting down the fuzz application 00:13:03.454 00:13:03.454 Dumping successful admin opcodes: 00:13:03.454 8, 9, 10, 24, 00:13:03.454 Dumping successful io opcodes: 00:13:03.454 0, 00:13:03.454 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1015202, total successful commands: 3980, random_seed: 3238254272 00:13:03.454 NS: 0x200003a1ef00 admin qp, Total commands completed: 249687, total successful commands: 2018, random_seed: 253532864 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3847773 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3847773 ']' 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3847773 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3847773 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:03.454 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3847773' 00:13:03.455 killing process with pid 3847773 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3847773 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3847773 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:03.455 00:13:03.455 real 0m32.783s 00:13:03.455 user 0m31.470s 00:13:03.455 sys 0m29.390s 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.455 18:27:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.455 ************************************ 00:13:03.455 END TEST nvmf_vfio_user_fuzz 00:13:03.455 ************************************ 00:13:03.455 18:27:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:03.455 18:27:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:03.455 18:27:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.455 18:27:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.455 18:27:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.455 ************************************ 
00:13:03.455 START TEST nvmf_host_management 00:13:03.455 ************************************ 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:03.455 * Looking for test storage... 00:13:03.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.455 
18:27:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:03.455 18:27:47 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:03.455 18:27:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.642 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:07.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:07.643 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:07.643 Found net devices under 0000:86:00.0: cvl_0_0 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:07.643 Found net devices under 0000:86:00.1: cvl_0_1 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.643 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:07.902 00:13:07.902 --- 10.0.0.2 ping statistics --- 00:13:07.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.902 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:13:07.902 00:13:07.902 --- 10.0.0.1 ping statistics --- 00:13:07.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.902 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3856265 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3856265 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3856265 ']' 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:07.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.902 18:27:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.902 [2024-07-15 18:27:53.397155] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:07.902 [2024-07-15 18:27:53.397197] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.902 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.159 [2024-07-15 18:27:53.464809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.159 [2024-07-15 18:27:53.543658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.159 [2024-07-15 18:27:53.543693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.159 [2024-07-15 18:27:53.543699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.159 [2024-07-15 18:27:53.543705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.159 [2024-07-15 18:27:53.543710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.159 [2024-07-15 18:27:53.543818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.159 [2024-07-15 18:27:53.543928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.159 [2024-07-15 18:27:53.544034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.159 [2024-07-15 18:27:53.544036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.724 [2024-07-15 18:27:54.256142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.724 18:27:54 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.724 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.983 Malloc0 00:13:08.983 [2024-07-15 18:27:54.315797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3856529 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3856529 /var/tmp/bdevperf.sock 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3856529 ']' 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:08.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
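Two layers were assembled in the trace above before bdevperf enters the picture: nvmftestinit moved one port of the e810 pair into a network namespace and wired up a 10.0.0.0/24 point-to-point path, and the target was then started inside that namespace and fed rpcs.txt, whose contents are not echoed. A condensed equivalent follows; the namespace commands are verbatim from the log, while the subsystem RPCs are reconstructed from the listener notice and the NQNs used later, so treat those exact flags (serial number, explicit add_host) as assumptions:

    # namespace plumbing: the target side of the link lives in a netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target inside the namespace; its unix RPC socket stays reachable from outside
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

    # transport options from the trace (-u 8192 is the IO unit size); subsystem setup assumed
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s "$NVMF_SERIAL"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Adding host0 explicitly (rather than allowing any host) matters here: the point of this test is to revoke that host later and watch the initiator cope.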
00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:08.983 { 00:13:08.983 "params": { 00:13:08.983 "name": "Nvme$subsystem", 00:13:08.983 "trtype": "$TEST_TRANSPORT", 00:13:08.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.983 "adrfam": "ipv4", 00:13:08.983 "trsvcid": "$NVMF_PORT", 00:13:08.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.983 "hdgst": ${hdgst:-false}, 00:13:08.983 "ddgst": ${ddgst:-false} 00:13:08.983 }, 00:13:08.983 "method": "bdev_nvme_attach_controller" 00:13:08.983 } 00:13:08.983 EOF 00:13:08.983 )") 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:08.983 18:27:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:08.983 "params": { 00:13:08.983 "name": "Nvme0", 00:13:08.983 "trtype": "tcp", 00:13:08.983 "traddr": "10.0.0.2", 00:13:08.983 "adrfam": "ipv4", 00:13:08.983 "trsvcid": "4420", 00:13:08.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.983 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:08.983 "hdgst": false, 00:13:08.983 "ddgst": false 00:13:08.983 }, 00:13:08.983 "method": "bdev_nvme_attach_controller" 00:13:08.983 }' 00:13:08.983 [2024-07-15 18:27:54.409031] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:08.983 [2024-07-15 18:27:54.409073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856529 ] 00:13:08.983 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.983 [2024-07-15 18:27:54.474084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.241 [2024-07-15 18:27:54.546788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.498 Running I/O for 10 seconds... 
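On the initiator side, the bdevperf invocation and the controller entry that gen_nvmf_target_json prints for it are spread through the trace above. Reformatted: the harness hands the JSON over via process substitution (hence bdevperf seeing /dev/fd/63), and the helper wraps the entry below into a full bdev-subsystem config document, a wrapper this trace does not show:

    # QD 64, 64 KiB I/O, verify workload, 10 seconds
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10

    # the attach entry gen_nvmf_target_json emits, as printed above:
    cat <<'EOF'
    { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" }
    EOF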
00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.759 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.759 [2024-07-15 18:27:55.291681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5460 is same with the state(5) to be set 00:13:09.759 [2024-07-15 18:27:55.291728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5460 is same with the state(5) to be set 00:13:09.759 [2024-07-15 18:27:55.291736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5460 is same with the state(5) to be set 
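This is the crux of the test: waitforio polled bdevperf's iostat until at least 100 reads had completed (771 by this point), and the script then revokes the initiator's access while I/O is still in flight; the flood of recv-state notices around here accompanies the resulting qpair teardown. The step itself is a single RPC, NQNs verbatim from the trace, with the same rpc_cmd-to-rpc.py mapping assumed as before:

    # host_management.sh@84: drop host0 from cnode0 mid-workload
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0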
00:13:09.759 tcp.c:1607:nvmf_tcp_qpair_set_recv_state: the *ERROR*: "The recv state of tqpair=0xae5460 is same with the state(5) to be set" message above repeats roughly 60 more times between 18:27:55.291742 and 18:27:55.292104; repeats condensed
00:13:09.760 [2024-07-15 18:27:55.292168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:09.760 [2024-07-15 18:27:55.292200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:09.760 nvme_qpair.c: matching READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1 through cid:63 (lba 106624 through 114560, len:128 each) between 18:27:55.292216 and 18:27:55.293200; repeats condensed
00:13:09.762 [2024-07-15 18:27:55.293208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3b20 is same with the state(5) to be set 00:13:09.762 [2024-07-15 18:27:55.293259] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaf3b20 was disconnected and freed. reset controller. 00:13:09.762 [2024-07-15 18:27:55.294169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:09.762 task offset: 106496 on job bdev=Nvme0n1 fails 00:13:09.762 00:13:09.762 Latency(us) 00:13:09.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.762 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:09.762 Job: Nvme0n1 ended in about 0.43 seconds with error 00:13:09.762 Verification LBA range: start 0x0 length 0x400 00:13:09.762 Nvme0n1 : 0.43 1925.85 120.37 148.14 0.00 30078.48 3635.69 26713.72 00:13:09.762 =================================================================================================================== 00:13:09.762 Total : 1925.85 120.37 148.14 0.00 30078.48 3635.69 26713.72 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.762 [2024-07-15 18:27:55.295742] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:09.762 [2024-07-15 18:27:55.295760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e2980 (9): Bad file descriptor 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.762 [2024-07-15 18:27:55.298626] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:09.762 [2024-07-15 18:27:55.298708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:09.762 [2024-07-15 18:27:55.298732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.762 [2024-07-15 18:27:55.298745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:09.762 [2024-07-15 18:27:55.298753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:09.762 [2024-07-15 18:27:55.298761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:09.762 [2024-07-15 18:27:55.298768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6e2980 00:13:09.762 [2024-07-15 18:27:55.298787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e2980 (9): Bad file descriptor 00:13:09.762 [2024-07-15 18:27:55.298799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:09.762 [2024-07-15
18:27:55.298806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:09.762 [2024-07-15 18:27:55.298814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:09.762 [2024-07-15 18:27:55.298827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.762 18:27:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3856529 00:13:11.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3856529) - No such process 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:11.135 { 00:13:11.135 "params": { 00:13:11.135 "name": "Nvme$subsystem", 00:13:11.135 "trtype": "$TEST_TRANSPORT", 00:13:11.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.135 "adrfam": "ipv4", 00:13:11.135 "trsvcid": "$NVMF_PORT", 00:13:11.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.135 "hdgst": ${hdgst:-false}, 00:13:11.135 "ddgst": ${ddgst:-false} 00:13:11.135 }, 00:13:11.135 "method": "bdev_nvme_attach_controller" 00:13:11.135 } 00:13:11.135 EOF 00:13:11.135 )") 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:11.135 18:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:11.135 "params": { 00:13:11.135 "name": "Nvme0", 00:13:11.135 "trtype": "tcp", 00:13:11.135 "traddr": "10.0.0.2", 00:13:11.135 "adrfam": "ipv4", 00:13:11.135 "trsvcid": "4420", 00:13:11.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:11.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:11.135 "hdgst": false, 00:13:11.135 "ddgst": false 00:13:11.135 }, 00:13:11.135 "method": "bdev_nvme_attach_controller" 00:13:11.135 }' 00:13:11.135 [2024-07-15 18:27:56.356265] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
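The gen_nvmf_target_json output above shows the suite expanding one heredoc per subsystem into a bash array, then joining and jq-validating the fragments to produce bdevperf's --json config. A self-contained sketch of that pattern, with placeholder values standing in for the suite's environment variables:

config=()
for n in 0; do
    config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
    )")
done
# join the fragments with commas and sanity-check, as the suite's "jq ." step does
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .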
00:13:11.135 [2024-07-15 18:27:56.356312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856785 ] 00:13:11.135 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.135 [2024-07-15 18:27:56.423711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.135 [2024-07-15 18:27:56.493141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.135 Running I/O for 1 seconds... 00:13:12.508 00:13:12.508 Latency(us) 00:13:12.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.508 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:12.508 Verification LBA range: start 0x0 length 0x400 00:13:12.508 Nvme0n1 : 1.01 2054.52 128.41 0.00 0.00 30549.76 2496.61 26713.72 00:13:12.508 =================================================================================================================== 00:13:12.508 Total : 2054.52 128.41 0.00 0.00 30549.76 2496.61 26713.72 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.508 rmmod nvme_tcp 00:13:12.508 rmmod nvme_fabrics 00:13:12.508 rmmod nvme_keyring 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3856265 ']' 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3856265 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3856265 ']' 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3856265 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3856265 00:13:12.508 18:27:57 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3856265' 00:13:12.508 killing process with pid 3856265 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3856265 00:13:12.508 18:27:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3856265 00:13:12.767 [2024-07-15 18:27:58.139533] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.767 18:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.676 18:28:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.676 18:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:14.676 00:13:14.676 real 0m12.795s 00:13:14.676 user 0m22.543s 00:13:14.676 sys 0m5.391s 00:13:14.676 18:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.676 18:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.676 ************************************ 00:13:14.676 END TEST nvmf_host_management 00:13:14.676 ************************************ 00:13:14.935 18:28:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.935 18:28:00 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:14.935 18:28:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.935 18:28:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.935 18:28:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.935 ************************************ 00:13:14.935 START TEST nvmf_lvol 00:13:14.935 ************************************ 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:14.935 * Looking for test storage... 
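The START/END banners and the real/user/sys timing above come from the suite's run_test wrapper; a simplified approximation of its behavior (the real helper in autotest_common.sh also validates arguments and records results, so treat this as a sketch, not the actual implementation):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # run the wrapped test, e.g. nvmf_lvol.sh --transport=tcp
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}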
00:13:14.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.935 18:28:00 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.935 18:28:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.936 18:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.503 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:21.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:21.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:21.504 Found net devices under 0000:86:00.0: cvl_0_0 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:21.504 Found net devices under 0000:86:00.1: cvl_0_1 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.504 
18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.504 18:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:21.504 00:13:21.504 --- 10.0.0.2 ping statistics --- 00:13:21.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.504 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:21.504 00:13:21.504 --- 10.0.0.1 ping statistics --- 00:13:21.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.504 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3860544 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3860544 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3860544 ']' 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.504 18:28:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.504 [2024-07-15 18:28:06.259990] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:21.504 [2024-07-15 18:28:06.260032] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.504 [2024-07-15 18:28:06.330028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.504 [2024-07-15 18:28:06.407234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.504 [2024-07-15 18:28:06.407269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
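
The nvmf_tcp_init step above turns one dual-port NIC into a point-to-point test topology: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. Condensed from the ip/iptables/nvmf_tgt commands visible in the log, with the Jenkins workspace prefix shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
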
00:13:21.504 [2024-07-15 18:28:06.407276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.504 [2024-07-15 18:28:06.407282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.504 [2024-07-15 18:28:06.407287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.504 [2024-07-15 18:28:06.407335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.504 [2024-07-15 18:28:06.407442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.504 [2024-07-15 18:28:06.407443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.504 18:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.504 18:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:21.504 18:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.504 18:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.504 18:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.761 18:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.761 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:21.761 [2024-07-15 18:28:07.242917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.761 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.033 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:22.033 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.297 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:22.297 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:22.297 18:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:22.555 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b12da116-be8a-4da1-90d9-b5d8851c77da 00:13:22.555 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b12da116-be8a-4da1-90d9-b5d8851c77da lvol 20 00:13:22.812 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=aa439f4c-0bcf-412b-a178-808b68692b11 00:13:22.812 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:23.069 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa439f4c-0bcf-412b-a178-808b68692b11 00:13:23.070 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
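
Everything the lvol test needs on the target is then provisioned through rpc.py, exactly as echoed above: a TCP transport, two 64 MiB malloc bdevs striped into a raid0, an lvstore on top of the raid, a 20 MiB lvol, and an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420. The same chain, with the workspace prefix shortened to ./scripts/rpc.py and the UUIDs the calls return shown as placeholders:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                # Malloc0: 64 MiB, 512 B blocks
  ./scripts/rpc.py bdev_malloc_create 64 512                # Malloc1
  ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  ./scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs       # prints <lvs-uuid>
  ./scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20   # prints <lvol-uuid>
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
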
00:13:23.328 [2024-07-15 18:28:08.737995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.328 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.586 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3861036 00:13:23.586 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:23.586 18:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:23.586 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.518 18:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot aa439f4c-0bcf-412b-a178-808b68692b11 MY_SNAPSHOT 00:13:24.775 18:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6188b3ad-c141-41fd-b11a-8b82602d7830 00:13:24.775 18:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize aa439f4c-0bcf-412b-a178-808b68692b11 30 00:13:25.033 18:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6188b3ad-c141-41fd-b11a-8b82602d7830 MY_CLONE 00:13:25.290 18:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0cfc3659-31a6-4d2d-a510-910e94f63b7a 00:13:25.290 18:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0cfc3659-31a6-4d2d-a510-910e94f63b7a 00:13:25.855 18:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3861036 00:13:33.982 Initializing NVMe Controllers 00:13:33.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:33.982 Controller IO queue size 128, less than required. 00:13:33.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:33.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:33.982 Initialization complete. Launching workers. 
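
The perf half of the test drives 4 KiB random writes at queue depth 128 for 10 seconds against the exported namespace, and while that workload is in flight the script mutates the lvol underneath it: snapshot, resize to 30 MiB, clone of the snapshot, inflate of the clone, then a wait on the perf pid. Condensed from the commands above, with UUIDs abbreviated to placeholders:

  ./build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  ./scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  ./scripts/rpc.py bdev_lvol_resize   <lvol-uuid> 30
  ./scripts/rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE
  ./scripts/rpc.py bdev_lvol_inflate  <clone-uuid>
  wait $perf_pid
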
00:13:33.982 ======================================================== 00:13:33.982 Latency(us) 00:13:33.983 Device Information : IOPS MiB/s Average min max 00:13:33.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12746.80 49.79 10047.20 1320.80 59116.06 00:13:33.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12627.40 49.33 10138.79 3621.33 58123.54 00:13:33.983 ======================================================== 00:13:33.983 Total : 25374.20 99.12 10092.78 1320.80 59116.06 00:13:33.983 00:13:33.983 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:34.242 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa439f4c-0bcf-412b-a178-808b68692b11 00:13:34.242 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b12da116-be8a-4da1-90d9-b5d8851c77da 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.501 18:28:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.501 rmmod nvme_tcp 00:13:34.501 rmmod nvme_fabrics 00:13:34.501 rmmod nvme_keyring 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3860544 ']' 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3860544 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3860544 ']' 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3860544 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.501 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3860544 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3860544' 00:13:34.760 killing process with pid 3860544 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3860544 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3860544 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.760 
18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.760 18:28:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.374 00:13:37.374 real 0m22.054s 00:13:37.374 user 1m4.285s 00:13:37.374 sys 0m7.065s 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:37.374 ************************************ 00:13:37.374 END TEST nvmf_lvol 00:13:37.374 ************************************ 00:13:37.374 18:28:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:37.374 18:28:22 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:37.374 18:28:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.374 18:28:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.374 18:28:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.374 ************************************ 00:13:37.374 START TEST nvmf_lvs_grow 00:13:37.374 ************************************ 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:37.374 * Looking for test storage... 
00:13:37.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.374 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.375 18:28:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.664 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.664 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.664 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:42.923 00:13:42.923 --- 10.0.0.2 ping statistics --- 00:13:42.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.923 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:13:42.923 00:13:42.923 --- 10.0.0.1 ping statistics --- 00:13:42.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.923 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3866397 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3866397 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3866397 ']' 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.923 18:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:42.923 [2024-07-15 18:28:28.411712] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:42.923 [2024-07-15 18:28:28.411757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.923 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.181 [2024-07-15 18:28:28.483604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.181 [2024-07-15 18:28:28.556610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.181 [2024-07-15 18:28:28.556648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:43.181 [2024-07-15 18:28:28.556654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.181 [2024-07-15 18:28:28.556660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.181 [2024-07-15 18:28:28.556665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.181 [2024-07-15 18:28:28.556687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.745 18:28:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.002 [2024-07-15 18:28:29.407409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.002 ************************************ 00:13:44.002 START TEST lvs_grow_clean 00:13:44.002 ************************************ 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.002 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.260 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:44.260 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:44.518 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:44.518 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:44.518 18:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:44.518 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:44.518 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:44.518 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 lvol 150 00:13:44.776 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a8d890c-8aef-4bb7-a75d-f79e19d99dda 00:13:44.776 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.776 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:45.034 [2024-07-15 18:28:30.354180] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:45.034 [2024-07-15 18:28:30.354229] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:45.034 true 00:13:45.034 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:45.034 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:45.034 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:45.034 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:45.292 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a8d890c-8aef-4bb7-a75d-f79e19d99dda 00:13:45.550 18:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:45.550 [2024-07-15 18:28:31.024204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.551 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3866899 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3866899 /var/tmp/bdevperf.sock 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3866899 ']' 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.810 18:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 [2024-07-15 18:28:31.247899] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
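
The geometry in the lvs_grow_clean setup is the point of the test: a 200 MiB file-backed aio bdev with 4 KiB blocks (51200 blocks) is formatted as an lvstore with 4 MiB clusters, and total_data_clusters comes back 49 rather than the raw 50, the difference presumably going to lvstore metadata. The file is then truncated to 400 MiB and bdev_aio_rescan doubles the block count to 102400, so that the bdev_lvol_grow_lvstore call later in the run can extend the store to 99 data clusters. Condensed, with paths and the backing-file name shortened:

  truncate -s 200M aio_bdev_file
  ./scripts/rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096    # 51200 x 4 KiB blocks
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs               # 49 of 50 clusters usable
  ./scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150        # 150 MiB lvol
  truncate -s 400M aio_bdev_file
  ./scripts/rpc.py bdev_aio_rescan aio_bdev                       # 51200 -> 102400 blocks
  # later in the run:
  # ./scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>         # 49 -> 99 data clusters
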
00:13:45.810 [2024-07-15 18:28:31.247949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866899 ] 00:13:45.810 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.810 [2024-07-15 18:28:31.312750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.069 [2024-07-15 18:28:31.384581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.637 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.637 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:46.637 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:46.895 Nvme0n1 00:13:46.895 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:47.154 [ 00:13:47.154 { 00:13:47.154 "name": "Nvme0n1", 00:13:47.154 "aliases": [ 00:13:47.154 "1a8d890c-8aef-4bb7-a75d-f79e19d99dda" 00:13:47.154 ], 00:13:47.154 "product_name": "NVMe disk", 00:13:47.154 "block_size": 4096, 00:13:47.154 "num_blocks": 38912, 00:13:47.154 "uuid": "1a8d890c-8aef-4bb7-a75d-f79e19d99dda", 00:13:47.154 "assigned_rate_limits": { 00:13:47.154 "rw_ios_per_sec": 0, 00:13:47.154 "rw_mbytes_per_sec": 0, 00:13:47.154 "r_mbytes_per_sec": 0, 00:13:47.154 "w_mbytes_per_sec": 0 00:13:47.154 }, 00:13:47.154 "claimed": false, 00:13:47.154 "zoned": false, 00:13:47.154 "supported_io_types": { 00:13:47.154 "read": true, 00:13:47.154 "write": true, 00:13:47.154 "unmap": true, 00:13:47.154 "flush": true, 00:13:47.154 "reset": true, 00:13:47.154 "nvme_admin": true, 00:13:47.154 "nvme_io": true, 00:13:47.154 "nvme_io_md": false, 00:13:47.154 "write_zeroes": true, 00:13:47.154 "zcopy": false, 00:13:47.154 "get_zone_info": false, 00:13:47.154 "zone_management": false, 00:13:47.154 "zone_append": false, 00:13:47.154 "compare": true, 00:13:47.154 "compare_and_write": true, 00:13:47.154 "abort": true, 00:13:47.154 "seek_hole": false, 00:13:47.154 "seek_data": false, 00:13:47.154 "copy": true, 00:13:47.154 "nvme_iov_md": false 00:13:47.154 }, 00:13:47.154 "memory_domains": [ 00:13:47.155 { 00:13:47.155 "dma_device_id": "system", 00:13:47.155 "dma_device_type": 1 00:13:47.155 } 00:13:47.155 ], 00:13:47.155 "driver_specific": { 00:13:47.155 "nvme": [ 00:13:47.155 { 00:13:47.155 "trid": { 00:13:47.155 "trtype": "TCP", 00:13:47.155 "adrfam": "IPv4", 00:13:47.155 "traddr": "10.0.0.2", 00:13:47.155 "trsvcid": "4420", 00:13:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:47.155 }, 00:13:47.155 "ctrlr_data": { 00:13:47.155 "cntlid": 1, 00:13:47.155 "vendor_id": "0x8086", 00:13:47.155 "model_number": "SPDK bdev Controller", 00:13:47.155 "serial_number": "SPDK0", 00:13:47.155 "firmware_revision": "24.09", 00:13:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.155 "oacs": { 00:13:47.155 "security": 0, 00:13:47.155 "format": 0, 00:13:47.155 "firmware": 0, 00:13:47.155 "ns_manage": 0 00:13:47.155 }, 00:13:47.155 "multi_ctrlr": true, 00:13:47.155 "ana_reporting": false 00:13:47.155 }, 
00:13:47.155 "vs": { 00:13:47.155 "nvme_version": "1.3" 00:13:47.155 }, 00:13:47.155 "ns_data": { 00:13:47.155 "id": 1, 00:13:47.155 "can_share": true 00:13:47.155 } 00:13:47.155 } 00:13:47.155 ], 00:13:47.155 "mp_policy": "active_passive" 00:13:47.155 } 00:13:47.155 } 00:13:47.155 ] 00:13:47.155 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3867138 00:13:47.155 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:47.155 18:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:47.155 Running I/O for 10 seconds... 00:13:48.090 Latency(us) 00:13:48.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.091 Nvme0n1 : 1.00 24006.00 93.77 0.00 0.00 0.00 0.00 0.00 00:13:48.091 =================================================================================================================== 00:13:48.091 Total : 24006.00 93.77 0.00 0.00 0.00 0.00 0.00 00:13:48.091 00:13:49.027 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:49.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.027 Nvme0n1 : 2.00 24095.00 94.12 0.00 0.00 0.00 0.00 0.00 00:13:49.027 =================================================================================================================== 00:13:49.027 Total : 24095.00 94.12 0.00 0.00 0.00 0.00 0.00 00:13:49.027 00:13:49.286 true 00:13:49.286 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:49.286 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:49.544 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:49.544 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:49.544 18:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3867138 00:13:50.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.111 Nvme0n1 : 3.00 24149.00 94.33 0.00 0.00 0.00 0.00 0.00 00:13:50.111 =================================================================================================================== 00:13:50.111 Total : 24149.00 94.33 0.00 0.00 0.00 0.00 0.00 00:13:50.111 00:13:51.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.049 Nvme0n1 : 4.00 24176.75 94.44 0.00 0.00 0.00 0.00 0.00 00:13:51.049 =================================================================================================================== 00:13:51.049 Total : 24176.75 94.44 0.00 0.00 0.00 0.00 0.00 00:13:51.049 00:13:52.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.426 Nvme0n1 : 5.00 24231.60 94.65 0.00 0.00 0.00 0.00 0.00 00:13:52.426 =================================================================================================================== 00:13:52.426 
Total : 24231.60 94.65 0.00 0.00 0.00 0.00 0.00 00:13:52.426 00:13:53.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.362 Nvme0n1 : 6.00 24261.17 94.77 0.00 0.00 0.00 0.00 0.00 00:13:53.362 =================================================================================================================== 00:13:53.362 Total : 24261.17 94.77 0.00 0.00 0.00 0.00 0.00 00:13:53.362 00:13:54.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.297 Nvme0n1 : 7.00 24278.71 94.84 0.00 0.00 0.00 0.00 0.00 00:13:54.297 =================================================================================================================== 00:13:54.297 Total : 24278.71 94.84 0.00 0.00 0.00 0.00 0.00 00:13:54.297 00:13:55.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.233 Nvme0n1 : 8.00 24239.75 94.69 0.00 0.00 0.00 0.00 0.00 00:13:55.233 =================================================================================================================== 00:13:55.233 Total : 24239.75 94.69 0.00 0.00 0.00 0.00 0.00 00:13:55.233 00:13:56.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.169 Nvme0n1 : 9.00 24256.67 94.75 0.00 0.00 0.00 0.00 0.00 00:13:56.169 =================================================================================================================== 00:13:56.169 Total : 24256.67 94.75 0.00 0.00 0.00 0.00 0.00 00:13:56.169 00:13:57.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.105 Nvme0n1 : 10.00 24275.80 94.83 0.00 0.00 0.00 0.00 0.00 00:13:57.105 =================================================================================================================== 00:13:57.105 Total : 24275.80 94.83 0.00 0.00 0.00 0.00 0.00 00:13:57.105 00:13:57.105 00:13:57.105 Latency(us) 00:13:57.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.105 Nvme0n1 : 10.01 24275.71 94.83 0.00 0.00 5269.80 1451.15 10298.51 00:13:57.105 =================================================================================================================== 00:13:57.105 Total : 24275.71 94.83 0.00 0.00 5269.80 1451.15 10298.51 00:13:57.105 0 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3866899 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3866899 ']' 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3866899 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.105 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866899 00:13:57.364 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:57.364 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:57.364 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866899' 00:13:57.364 killing process with pid 3866899 00:13:57.364 18:28:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3866899 00:13:57.364 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.364 00:13:57.364 Latency(us) 00:13:57.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.364 =================================================================================================================== 00:13:57.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.364 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3866899 00:13:57.364 18:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.623 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:57.886 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:57.886 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:57.886 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:57.886 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:57.886 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:58.144 [2024-07-15 18:28:43.529861] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:58.144 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:58.403 request: 00:13:58.403 { 00:13:58.403 "uuid": "162d9e3e-b9dc-4acc-aaab-169b095ff5f9", 00:13:58.403 "method": "bdev_lvol_get_lvstores", 00:13:58.403 "req_id": 1 00:13:58.403 } 00:13:58.403 Got JSON-RPC error response 00:13:58.403 response: 00:13:58.403 { 00:13:58.403 "code": -19, 00:13:58.403 "message": "No such device" 00:13:58.403 } 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:58.403 aio_bdev 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a8d890c-8aef-4bb7-a75d-f79e19d99dda 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1a8d890c-8aef-4bb7-a75d-f79e19d99dda 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:58.403 18:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.662 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a8d890c-8aef-4bb7-a75d-f79e19d99dda -t 2000 00:13:58.662 [ 00:13:58.662 { 00:13:58.662 "name": "1a8d890c-8aef-4bb7-a75d-f79e19d99dda", 00:13:58.662 "aliases": [ 00:13:58.662 "lvs/lvol" 00:13:58.662 ], 00:13:58.662 "product_name": "Logical Volume", 00:13:58.662 "block_size": 4096, 00:13:58.662 "num_blocks": 38912, 00:13:58.662 "uuid": "1a8d890c-8aef-4bb7-a75d-f79e19d99dda", 00:13:58.662 "assigned_rate_limits": { 00:13:58.662 "rw_ios_per_sec": 0, 00:13:58.662 "rw_mbytes_per_sec": 0, 00:13:58.662 "r_mbytes_per_sec": 0, 00:13:58.662 "w_mbytes_per_sec": 0 00:13:58.662 }, 00:13:58.662 "claimed": false, 00:13:58.662 "zoned": false, 00:13:58.662 "supported_io_types": { 00:13:58.662 "read": true, 00:13:58.662 "write": true, 00:13:58.662 "unmap": true, 00:13:58.662 "flush": false, 00:13:58.662 "reset": true, 00:13:58.662 "nvme_admin": false, 00:13:58.662 "nvme_io": false, 00:13:58.662 
"nvme_io_md": false, 00:13:58.662 "write_zeroes": true, 00:13:58.662 "zcopy": false, 00:13:58.662 "get_zone_info": false, 00:13:58.662 "zone_management": false, 00:13:58.662 "zone_append": false, 00:13:58.662 "compare": false, 00:13:58.662 "compare_and_write": false, 00:13:58.662 "abort": false, 00:13:58.662 "seek_hole": true, 00:13:58.662 "seek_data": true, 00:13:58.663 "copy": false, 00:13:58.663 "nvme_iov_md": false 00:13:58.663 }, 00:13:58.663 "driver_specific": { 00:13:58.663 "lvol": { 00:13:58.663 "lvol_store_uuid": "162d9e3e-b9dc-4acc-aaab-169b095ff5f9", 00:13:58.663 "base_bdev": "aio_bdev", 00:13:58.663 "thin_provision": false, 00:13:58.663 "num_allocated_clusters": 38, 00:13:58.663 "snapshot": false, 00:13:58.663 "clone": false, 00:13:58.663 "esnap_clone": false 00:13:58.663 } 00:13:58.663 } 00:13:58.663 } 00:13:58.663 ] 00:13:58.663 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:58.663 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:58.663 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:58.921 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:58.921 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:58.921 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:59.180 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:59.180 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a8d890c-8aef-4bb7-a75d-f79e19d99dda 00:13:59.180 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 162d9e3e-b9dc-4acc-aaab-169b095ff5f9 00:13:59.439 18:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.698 00:13:59.698 real 0m15.646s 00:13:59.698 user 0m15.372s 00:13:59.698 sys 0m1.397s 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 ************************************ 00:13:59.698 END TEST lvs_grow_clean 00:13:59.698 ************************************ 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:59.698 ************************************ 00:13:59.698 START TEST lvs_grow_dirty 00:13:59.698 ************************************ 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.698 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.957 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:59.957 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7100c10e-54bc-4bed-9aae-0d8625096405 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:00.215 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7100c10e-54bc-4bed-9aae-0d8625096405 lvol 150 00:14:00.474 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:00.474 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.474 18:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:00.732 
[2024-07-15 18:28:46.059068] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:00.732 [2024-07-15 18:28:46.059118] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:00.732 true 00:14:00.732 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:00.732 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:00.732 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:00.732 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.991 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:01.250 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:01.250 [2024-07-15 18:28:46.729048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.250 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3869496 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3869496 /var/tmp/bdevperf.sock 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3869496 ']' 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
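The trace above brings up bdevperf as a standalone NVMe/TCP initiator against the subsystem that exports the lvol. Condensed into a runnable sketch (socket, flags, address and NQN are taken verbatim from the trace; $SPDK is a placeholder for the checked-out tree used throughout this log):

    # launch bdevperf on a private RPC socket: 4 KiB random writes, QD 128, 10 s, wait for start signal (-z)
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the target's namespace as bdev Nvme0 over TCP
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # start the queued workload; per-second Latency tables like the ones below are printed while it runs
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests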
00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.509 18:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:01.509 [2024-07-15 18:28:46.946035] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:01.509 [2024-07-15 18:28:46.946083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3869496 ] 00:14:01.509 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.509 [2024-07-15 18:28:47.010650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.766 [2024-07-15 18:28:47.090607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.331 18:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.331 18:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:02.331 18:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:02.588 Nvme0n1 00:14:02.588 18:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:02.854 [ 00:14:02.854 { 00:14:02.854 "name": "Nvme0n1", 00:14:02.854 "aliases": [ 00:14:02.854 "a8aea039-8cd5-425b-a7ef-7c8490c2f01e" 00:14:02.854 ], 00:14:02.854 "product_name": "NVMe disk", 00:14:02.854 "block_size": 4096, 00:14:02.854 "num_blocks": 38912, 00:14:02.854 "uuid": "a8aea039-8cd5-425b-a7ef-7c8490c2f01e", 00:14:02.854 "assigned_rate_limits": { 00:14:02.854 "rw_ios_per_sec": 0, 00:14:02.854 "rw_mbytes_per_sec": 0, 00:14:02.854 "r_mbytes_per_sec": 0, 00:14:02.854 "w_mbytes_per_sec": 0 00:14:02.854 }, 00:14:02.854 "claimed": false, 00:14:02.854 "zoned": false, 00:14:02.854 "supported_io_types": { 00:14:02.854 "read": true, 00:14:02.854 "write": true, 00:14:02.854 "unmap": true, 00:14:02.854 "flush": true, 00:14:02.854 "reset": true, 00:14:02.854 "nvme_admin": true, 00:14:02.854 "nvme_io": true, 00:14:02.854 "nvme_io_md": false, 00:14:02.854 "write_zeroes": true, 00:14:02.854 "zcopy": false, 00:14:02.854 "get_zone_info": false, 00:14:02.854 "zone_management": false, 00:14:02.854 "zone_append": false, 00:14:02.854 "compare": true, 00:14:02.854 "compare_and_write": true, 00:14:02.854 "abort": true, 00:14:02.854 "seek_hole": false, 00:14:02.854 "seek_data": false, 00:14:02.854 "copy": true, 00:14:02.854 "nvme_iov_md": false 00:14:02.854 }, 00:14:02.854 "memory_domains": [ 00:14:02.854 { 00:14:02.854 "dma_device_id": "system", 00:14:02.854 "dma_device_type": 1 00:14:02.854 } 00:14:02.854 ], 00:14:02.854 "driver_specific": { 00:14:02.854 "nvme": [ 00:14:02.854 { 00:14:02.854 "trid": { 00:14:02.854 "trtype": "TCP", 00:14:02.854 "adrfam": "IPv4", 00:14:02.854 "traddr": "10.0.0.2", 00:14:02.854 "trsvcid": "4420", 00:14:02.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:02.854 }, 00:14:02.854 "ctrlr_data": { 00:14:02.854 "cntlid": 1, 00:14:02.854 "vendor_id": "0x8086", 00:14:02.854 "model_number": "SPDK bdev Controller", 00:14:02.854 "serial_number": "SPDK0", 
00:14:02.854 "firmware_revision": "24.09", 00:14:02.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.854 "oacs": { 00:14:02.854 "security": 0, 00:14:02.854 "format": 0, 00:14:02.854 "firmware": 0, 00:14:02.854 "ns_manage": 0 00:14:02.854 }, 00:14:02.854 "multi_ctrlr": true, 00:14:02.854 "ana_reporting": false 00:14:02.854 }, 00:14:02.854 "vs": { 00:14:02.854 "nvme_version": "1.3" 00:14:02.854 }, 00:14:02.854 "ns_data": { 00:14:02.854 "id": 1, 00:14:02.854 "can_share": true 00:14:02.854 } 00:14:02.854 } 00:14:02.854 ], 00:14:02.854 "mp_policy": "active_passive" 00:14:02.854 } 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 18:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3869728 00:14:02.854 18:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:02.854 18:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.854 Running I/O for 10 seconds... 00:14:03.809 Latency(us) 00:14:03.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.809 Nvme0n1 : 1.00 24007.00 93.78 0.00 0.00 0.00 0.00 0.00 00:14:03.809 =================================================================================================================== 00:14:03.809 Total : 24007.00 93.78 0.00 0.00 0.00 0.00 0.00 00:14:03.809 00:14:04.743 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:04.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.743 Nvme0n1 : 2.00 24039.00 93.90 0.00 0.00 0.00 0.00 0.00 00:14:04.743 =================================================================================================================== 00:14:04.743 Total : 24039.00 93.90 0.00 0.00 0.00 0.00 0.00 00:14:04.743 00:14:05.001 true 00:14:05.001 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:05.001 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:05.260 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:05.260 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:05.260 18:28:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3869728 00:14:05.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.826 Nvme0n1 : 3.00 24050.67 93.95 0.00 0.00 0.00 0.00 0.00 00:14:05.826 =================================================================================================================== 00:14:05.826 Total : 24050.67 93.95 0.00 0.00 0.00 0.00 0.00 00:14:05.826 00:14:06.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.761 Nvme0n1 : 4.00 24134.00 94.27 0.00 0.00 0.00 0.00 0.00 00:14:06.761 =================================================================================================================== 00:14:06.761 Total : 24134.00 94.27 0.00 
0.00 0.00 0.00 0.00 00:14:06.761 00:14:08.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.137 Nvme0n1 : 5.00 24199.00 94.53 0.00 0.00 0.00 0.00 0.00 00:14:08.137 =================================================================================================================== 00:14:08.137 Total : 24199.00 94.53 0.00 0.00 0.00 0.00 0.00 00:14:08.137 00:14:09.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.072 Nvme0n1 : 6.00 24241.17 94.69 0.00 0.00 0.00 0.00 0.00 00:14:09.072 =================================================================================================================== 00:14:09.072 Total : 24241.17 94.69 0.00 0.00 0.00 0.00 0.00 00:14:09.072 00:14:10.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.008 Nvme0n1 : 7.00 24262.14 94.77 0.00 0.00 0.00 0.00 0.00 00:14:10.008 =================================================================================================================== 00:14:10.008 Total : 24262.14 94.77 0.00 0.00 0.00 0.00 0.00 00:14:10.008 00:14:10.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.951 Nvme0n1 : 8.00 24286.25 94.87 0.00 0.00 0.00 0.00 0.00 00:14:10.951 =================================================================================================================== 00:14:10.951 Total : 24286.25 94.87 0.00 0.00 0.00 0.00 0.00 00:14:10.951 00:14:11.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.887 Nvme0n1 : 9.00 24304.44 94.94 0.00 0.00 0.00 0.00 0.00 00:14:11.887 =================================================================================================================== 00:14:11.887 Total : 24304.44 94.94 0.00 0.00 0.00 0.00 0.00 00:14:11.887 00:14:12.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.823 Nvme0n1 : 10.00 24317.60 94.99 0.00 0.00 0.00 0.00 0.00 00:14:12.823 =================================================================================================================== 00:14:12.823 Total : 24317.60 94.99 0.00 0.00 0.00 0.00 0.00 00:14:12.823 00:14:12.823 00:14:12.823 Latency(us) 00:14:12.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.823 Nvme0n1 : 10.00 24319.25 95.00 0.00 0.00 5260.30 1451.15 12295.80 00:14:12.823 =================================================================================================================== 00:14:12.823 Total : 24319.25 95.00 0.00 0.00 5260.30 1451.15 12295.80 00:14:12.823 0 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3869496 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3869496 ']' 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3869496 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3869496 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:12.823 18:28:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3869496' 00:14:12.823 killing process with pid 3869496 00:14:12.823 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3869496 00:14:12.823 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.823 00:14:12.823 Latency(us) 00:14:12.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.823 =================================================================================================================== 00:14:12.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.082 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3869496 00:14:13.082 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:13.341 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.600 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:13.600 18:28:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:13.600 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:13.600 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:13.600 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3866397 00:14:13.600 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3866397 00:14:13.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3866397 Killed "${NVMF_APP[@]}" "$@" 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3871589 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3871589 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3871589 ']' 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.859 18:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:13.859 [2024-07-15 18:28:59.258672] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:13.859 [2024-07-15 18:28:59.258718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.859 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.859 [2024-07-15 18:28:59.326125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.859 [2024-07-15 18:28:59.397125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.859 [2024-07-15 18:28:59.397160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.859 [2024-07-15 18:28:59.397167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.859 [2024-07-15 18:28:59.397173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.859 [2024-07-15 18:28:59.397177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
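The nvmf_tgt restart above enables every tracepoint group (-e 0xFFFF), and the app output spells out the two ways to get at the resulting trace. As a sketch (binary location assumed to match the same build tree as the rest of this log):

    # live snapshot of the running target's trace (shm name 'nvmf', instance id 0), as the app suggests
    $SPDK/build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0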
00:14:13.859 [2024-07-15 18:28:59.397194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.796 [2024-07-15 18:29:00.238054] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:14.796 [2024-07-15 18:29:00.238133] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:14.796 [2024-07-15 18:29:00.238156] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.796 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:15.055 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8aea039-8cd5-425b-a7ef-7c8490c2f01e -t 2000 00:14:15.055 [ 00:14:15.055 { 00:14:15.055 "name": "a8aea039-8cd5-425b-a7ef-7c8490c2f01e", 00:14:15.055 "aliases": [ 00:14:15.055 "lvs/lvol" 00:14:15.055 ], 00:14:15.055 "product_name": "Logical Volume", 00:14:15.055 "block_size": 4096, 00:14:15.055 "num_blocks": 38912, 00:14:15.055 "uuid": "a8aea039-8cd5-425b-a7ef-7c8490c2f01e", 00:14:15.055 "assigned_rate_limits": { 00:14:15.055 "rw_ios_per_sec": 0, 00:14:15.055 "rw_mbytes_per_sec": 0, 00:14:15.055 "r_mbytes_per_sec": 0, 00:14:15.055 "w_mbytes_per_sec": 0 00:14:15.055 }, 00:14:15.055 "claimed": false, 00:14:15.055 "zoned": false, 00:14:15.055 "supported_io_types": { 00:14:15.055 "read": true, 00:14:15.055 "write": true, 00:14:15.055 "unmap": true, 00:14:15.055 "flush": false, 00:14:15.055 "reset": true, 00:14:15.055 "nvme_admin": false, 00:14:15.055 "nvme_io": false, 00:14:15.055 "nvme_io_md": 
false, 00:14:15.055 "write_zeroes": true, 00:14:15.055 "zcopy": false, 00:14:15.055 "get_zone_info": false, 00:14:15.055 "zone_management": false, 00:14:15.055 "zone_append": false, 00:14:15.055 "compare": false, 00:14:15.055 "compare_and_write": false, 00:14:15.055 "abort": false, 00:14:15.055 "seek_hole": true, 00:14:15.055 "seek_data": true, 00:14:15.055 "copy": false, 00:14:15.055 "nvme_iov_md": false 00:14:15.055 }, 00:14:15.055 "driver_specific": { 00:14:15.055 "lvol": { 00:14:15.055 "lvol_store_uuid": "7100c10e-54bc-4bed-9aae-0d8625096405", 00:14:15.055 "base_bdev": "aio_bdev", 00:14:15.055 "thin_provision": false, 00:14:15.055 "num_allocated_clusters": 38, 00:14:15.055 "snapshot": false, 00:14:15.055 "clone": false, 00:14:15.055 "esnap_clone": false 00:14:15.055 } 00:14:15.055 } 00:14:15.055 } 00:14:15.055 ] 00:14:15.055 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:15.055 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:15.055 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:15.316 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:15.317 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:15.317 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:15.576 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:15.576 18:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:15.576 [2024-07-15 18:29:01.114822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:15.835 request: 00:14:15.835 { 00:14:15.835 "uuid": "7100c10e-54bc-4bed-9aae-0d8625096405", 00:14:15.835 "method": "bdev_lvol_get_lvstores", 00:14:15.835 "req_id": 1 00:14:15.835 } 00:14:15.835 Got JSON-RPC error response 00:14:15.835 response: 00:14:15.835 { 00:14:15.835 "code": -19, 00:14:15.835 "message": "No such device" 00:14:15.835 } 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.835 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.094 aio_bdev 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:16.094 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8aea039-8cd5-425b-a7ef-7c8490c2f01e -t 2000 00:14:16.351 [ 00:14:16.351 { 00:14:16.351 "name": "a8aea039-8cd5-425b-a7ef-7c8490c2f01e", 00:14:16.351 "aliases": [ 00:14:16.351 "lvs/lvol" 00:14:16.351 ], 00:14:16.351 "product_name": "Logical Volume", 00:14:16.351 "block_size": 4096, 00:14:16.351 "num_blocks": 38912, 00:14:16.351 "uuid": "a8aea039-8cd5-425b-a7ef-7c8490c2f01e", 00:14:16.351 "assigned_rate_limits": { 00:14:16.351 "rw_ios_per_sec": 0, 00:14:16.351 "rw_mbytes_per_sec": 0, 00:14:16.351 "r_mbytes_per_sec": 0, 00:14:16.351 "w_mbytes_per_sec": 0 00:14:16.352 }, 00:14:16.352 "claimed": false, 00:14:16.352 "zoned": false, 00:14:16.352 "supported_io_types": { 
00:14:16.352 "read": true, 00:14:16.352 "write": true, 00:14:16.352 "unmap": true, 00:14:16.352 "flush": false, 00:14:16.352 "reset": true, 00:14:16.352 "nvme_admin": false, 00:14:16.352 "nvme_io": false, 00:14:16.352 "nvme_io_md": false, 00:14:16.352 "write_zeroes": true, 00:14:16.352 "zcopy": false, 00:14:16.352 "get_zone_info": false, 00:14:16.352 "zone_management": false, 00:14:16.352 "zone_append": false, 00:14:16.352 "compare": false, 00:14:16.352 "compare_and_write": false, 00:14:16.352 "abort": false, 00:14:16.352 "seek_hole": true, 00:14:16.352 "seek_data": true, 00:14:16.352 "copy": false, 00:14:16.352 "nvme_iov_md": false 00:14:16.352 }, 00:14:16.352 "driver_specific": { 00:14:16.352 "lvol": { 00:14:16.352 "lvol_store_uuid": "7100c10e-54bc-4bed-9aae-0d8625096405", 00:14:16.352 "base_bdev": "aio_bdev", 00:14:16.352 "thin_provision": false, 00:14:16.352 "num_allocated_clusters": 38, 00:14:16.352 "snapshot": false, 00:14:16.352 "clone": false, 00:14:16.352 "esnap_clone": false 00:14:16.352 } 00:14:16.352 } 00:14:16.352 } 00:14:16.352 ] 00:14:16.352 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:16.352 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:16.352 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:16.609 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:16.609 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:16.609 18:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:16.609 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:16.609 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8aea039-8cd5-425b-a7ef-7c8490c2f01e 00:14:16.867 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7100c10e-54bc-4bed-9aae-0d8625096405 00:14:17.125 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:17.125 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:17.384 00:14:17.384 real 0m17.504s 00:14:17.384 user 0m44.941s 00:14:17.384 sys 0m3.645s 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:17.384 ************************************ 00:14:17.384 END TEST lvs_grow_dirty 00:14:17.384 ************************************ 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:17.384 nvmf_trace.0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.384 rmmod nvme_tcp 00:14:17.384 rmmod nvme_fabrics 00:14:17.384 rmmod nvme_keyring 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3871589 ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3871589 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3871589 ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3871589 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3871589 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3871589' 00:14:17.384 killing process with pid 3871589 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3871589 00:14:17.384 18:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3871589 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.643 
18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.643 18:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.566 18:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.825 00:14:19.825 real 0m42.703s 00:14:19.825 user 1m6.122s 00:14:19.825 sys 0m9.893s 00:14:19.825 18:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.825 18:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:19.825 ************************************ 00:14:19.825 END TEST nvmf_lvs_grow 00:14:19.825 ************************************ 00:14:19.825 18:29:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.825 18:29:05 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:19.825 18:29:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.825 18:29:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.825 18:29:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.825 ************************************ 00:14:19.825 START TEST nvmf_bdev_io_wait 00:14:19.825 ************************************ 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:19.825 * Looking for test storage... 
00:14:19.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.825 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.826 18:29:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:26.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:26.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:26.395 Found net devices under 0000:86:00.0: cvl_0_0 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:26.395 Found net devices under 0000:86:00.1: cvl_0_1 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.395 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:26.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:14:26.396 00:14:26.396 --- 10.0.0.2 ping statistics --- 00:14:26.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.396 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:14:26.396 00:14:26.396 --- 10.0.0.1 ping statistics --- 00:14:26.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.396 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.396 18:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3875643 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3875643 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3875643 ']' 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.396 [2024-07-15 18:29:11.070358] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:26.396 [2024-07-15 18:29:11.070403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.396 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.396 [2024-07-15 18:29:11.141356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.396 [2024-07-15 18:29:11.222536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.396 [2024-07-15 18:29:11.222567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.396 [2024-07-15 18:29:11.222574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.396 [2024-07-15 18:29:11.222580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.396 [2024-07-15 18:29:11.222585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.396 [2024-07-15 18:29:11.222639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.396 [2024-07-15 18:29:11.222749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.396 [2024-07-15 18:29:11.222778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.396 [2024-07-15 18:29:11.222779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.396 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 [2024-07-15 18:29:11.988383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
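The transport has just been created; the next traced rpc_cmd calls add the malloc bdev, the subsystem, its namespace, and the TCP listener. A minimal sketch of the same five-step setup driven directly with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (the test's rpc_cmd helper wraps this script; the relative checkout path is an assumption):

    # target-side setup, mirroring the rpc_cmd trace around this point
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420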
00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.655 18:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 Malloc0 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.655 [2024-07-15 18:29:12.044948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3875893 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3875895 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.655 { 00:14:26.655 "params": { 00:14:26.655 "name": "Nvme$subsystem", 00:14:26.655 "trtype": "$TEST_TRANSPORT", 00:14:26.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.655 "adrfam": "ipv4", 00:14:26.655 "trsvcid": "$NVMF_PORT", 00:14:26.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.655 "hdgst": ${hdgst:-false}, 00:14:26.655 "ddgst": ${ddgst:-false} 00:14:26.655 }, 00:14:26.655 "method": "bdev_nvme_attach_controller" 00:14:26.655 } 00:14:26.655 EOF 00:14:26.655 )") 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3875897 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.655 { 00:14:26.655 "params": { 00:14:26.655 "name": "Nvme$subsystem", 00:14:26.655 "trtype": "$TEST_TRANSPORT", 00:14:26.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.655 "adrfam": "ipv4", 00:14:26.655 "trsvcid": "$NVMF_PORT", 00:14:26.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.655 "hdgst": ${hdgst:-false}, 00:14:26.655 "ddgst": ${ddgst:-false} 00:14:26.655 }, 00:14:26.655 "method": "bdev_nvme_attach_controller" 00:14:26.655 } 00:14:26.655 EOF 00:14:26.655 )") 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3875900 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.655 { 00:14:26.655 "params": { 00:14:26.655 "name": "Nvme$subsystem", 00:14:26.655 "trtype": "$TEST_TRANSPORT", 00:14:26.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.655 "adrfam": "ipv4", 00:14:26.655 "trsvcid": "$NVMF_PORT", 00:14:26.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.655 "hdgst": ${hdgst:-false}, 00:14:26.655 "ddgst": ${ddgst:-false} 00:14:26.655 }, 00:14:26.655 "method": "bdev_nvme_attach_controller" 00:14:26.655 } 00:14:26.655 EOF 00:14:26.655 )") 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.655 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.656 18:29:12 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.656 { 00:14:26.656 "params": { 00:14:26.656 "name": "Nvme$subsystem", 00:14:26.656 "trtype": "$TEST_TRANSPORT", 00:14:26.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.656 "adrfam": "ipv4", 00:14:26.656 "trsvcid": "$NVMF_PORT", 00:14:26.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.656 "hdgst": ${hdgst:-false}, 00:14:26.656 "ddgst": ${ddgst:-false} 00:14:26.656 }, 00:14:26.656 "method": "bdev_nvme_attach_controller" 00:14:26.656 } 00:14:26.656 EOF 00:14:26.656 )") 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3875893 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.656 "params": { 00:14:26.656 "name": "Nvme1", 00:14:26.656 "trtype": "tcp", 00:14:26.656 "traddr": "10.0.0.2", 00:14:26.656 "adrfam": "ipv4", 00:14:26.656 "trsvcid": "4420", 00:14:26.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.656 "hdgst": false, 00:14:26.656 "ddgst": false 00:14:26.656 }, 00:14:26.656 "method": "bdev_nvme_attach_controller" 00:14:26.656 }' 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
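Each bdevperf instance receives its bdev table over an anonymous pipe rather than a config file: gen_nvmf_target_json collects the bdev_nvme_attach_controller params built in the heredocs above, jq pretty-prints them, and bash process substitution exposes the stream as the /dev/fd/63 seen in the trace. A sketch of that pattern for the write job (same flags as the traced invocation; the relative bdevperf path is an assumption):

    # <(...) is what bdevperf sees as --json /dev/fd/63
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)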
00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.656 "params": { 00:14:26.656 "name": "Nvme1", 00:14:26.656 "trtype": "tcp", 00:14:26.656 "traddr": "10.0.0.2", 00:14:26.656 "adrfam": "ipv4", 00:14:26.656 "trsvcid": "4420", 00:14:26.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.656 "hdgst": false, 00:14:26.656 "ddgst": false 00:14:26.656 }, 00:14:26.656 "method": "bdev_nvme_attach_controller" 00:14:26.656 }' 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.656 "params": { 00:14:26.656 "name": "Nvme1", 00:14:26.656 "trtype": "tcp", 00:14:26.656 "traddr": "10.0.0.2", 00:14:26.656 "adrfam": "ipv4", 00:14:26.656 "trsvcid": "4420", 00:14:26.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.656 "hdgst": false, 00:14:26.656 "ddgst": false 00:14:26.656 }, 00:14:26.656 "method": "bdev_nvme_attach_controller" 00:14:26.656 }' 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.656 18:29:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.656 "params": { 00:14:26.656 "name": "Nvme1", 00:14:26.656 "trtype": "tcp", 00:14:26.656 "traddr": "10.0.0.2", 00:14:26.656 "adrfam": "ipv4", 00:14:26.656 "trsvcid": "4420", 00:14:26.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.656 "hdgst": false, 00:14:26.656 "ddgst": false 00:14:26.656 }, 00:14:26.656 "method": "bdev_nvme_attach_controller" 00:14:26.656 }' 00:14:26.656 [2024-07-15 18:29:12.093802] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:26.656 [2024-07-15 18:29:12.093850] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:26.656 [2024-07-15 18:29:12.097140] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:26.656 [2024-07-15 18:29:12.097183] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:26.656 [2024-07-15 18:29:12.098353] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:26.656 [2024-07-15 18:29:12.098353] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:26.656 [2024-07-15 18:29:12.098397] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:26.656 [2024-07-15 18:29:12.098397] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:26.656 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.914 [2024-07-15 18:29:12.269938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.914 [2024-07-15 18:29:12.346543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:26.914 [2024-07-15 18:29:12.364486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.914 [2024-07-15 18:29:12.430121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.914 [2024-07-15 18:29:12.454558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:27.172 [2024-07-15 18:29:12.479994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.172 [2024-07-15 18:29:12.506986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:27.172 [2024-07-15 18:29:12.552346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:27.172 Running I/O for 1 seconds... 00:14:27.431 Running I/O for 1 seconds... 00:14:27.431 Running I/O for 1 seconds... 00:14:27.431 Running I/O for 1 seconds...
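Four one-second jobs are now in flight, one per workload, and their per-job result tables follow. For reference, the subsystem they exercise is also reachable with the kernel initiator, using the NVME_HOSTNQN sourced from nvmf/common.sh earlier (a sketch only; nvme-tcp was modprobe'd above, and the command would run on the host side, which kept cvl_0_1/10.0.0.1):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562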
00:14:28.365 00:14:28.365 Latency(us) 00:14:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.365 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:28.365 Nvme1n1 : 1.01 12253.34 47.86 0.00 0.00 10407.26 6397.56 17101.78 00:14:28.365 =================================================================================================================== 00:14:28.365 Total : 12253.34 47.86 0.00 0.00 10407.26 6397.56 17101.78 00:14:28.365 00:14:28.365 Latency(us) 00:14:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.365 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:28.365 Nvme1n1 : 1.01 10361.69 40.48 0.00 0.00 12311.90 5679.79 20971.52 00:14:28.365 =================================================================================================================== 00:14:28.365 Total : 10361.69 40.48 0.00 0.00 12311.90 5679.79 20971.52 00:14:28.365 00:14:28.365 Latency(us) 00:14:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.365 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:28.365 Nvme1n1 : 1.00 251286.16 981.59 0.00 0.00 507.11 202.85 674.86 00:14:28.365 =================================================================================================================== 00:14:28.365 Total : 251286.16 981.59 0.00 0.00 507.11 202.85 674.86 00:14:28.365 00:14:28.365 Latency(us) 00:14:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.365 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:28.365 Nvme1n1 : 1.00 11260.65 43.99 0.00 0.00 11337.45 4493.90 23967.45 00:14:28.365 =================================================================================================================== 00:14:28.365 Total : 11260.65 43.99 0.00 0.00 11337.45 4493.90 23967.45 00:14:28.624 18:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3875895 00:14:28.624 18:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3875897 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3875900 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.624 rmmod nvme_tcp 00:14:28.624 rmmod nvme_fabrics 00:14:28.624 rmmod nvme_keyring 00:14:28.624 18:29:14 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3875643 ']' 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3875643 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3875643 ']' 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3875643 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3875643 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.624 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3875643' 00:14:28.624 killing process with pid 3875643 00:14:28.625 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3875643 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3875643 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.884 18:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.501 18:29:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.501 00:14:31.501 real 0m11.228s 00:14:31.501 user 0m19.662s 00:14:31.501 sys 0m6.042s 00:14:31.501 18:29:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.501 18:29:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:31.501 ************************************ 00:14:31.501 END TEST nvmf_bdev_io_wait 00:14:31.501 ************************************ 00:14:31.501 18:29:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.501 18:29:16 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.501 18:29:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.501 18:29:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.501 18:29:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.501 ************************************ 00:14:31.501 START TEST nvmf_queue_depth 00:14:31.501 
************************************ 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:31.501 * Looking for test storage... 00:14:31.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.501 18:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.773 
18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.773 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.773 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.773 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:37.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:14:37.032 00:14:37.032 --- 10.0.0.2 ping statistics --- 00:14:37.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.032 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:14:37.032 00:14:37.032 --- 10.0.0.1 ping statistics --- 00:14:37.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.032 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3879679 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3879679 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3879679 ']' 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.032 18:29:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.032 [2024-07-15 18:29:22.447464] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:37.032 [2024-07-15 18:29:22.447509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.032 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.032 [2024-07-15 18:29:22.514368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.290 [2024-07-15 18:29:22.590986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.290 [2024-07-15 18:29:22.591020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.290 [2024-07-15 18:29:22.591027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.290 [2024-07-15 18:29:22.591032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.290 [2024-07-15 18:29:22.591037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.290 [2024-07-15 18:29:22.591054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 [2024-07-15 18:29:23.281516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 Malloc0 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 
18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 [2024-07-15 18:29:23.343923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3879924 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3879924 /var/tmp/bdevperf.sock 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3879924 ']' 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.856 18:29:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 [2024-07-15 18:29:23.394618] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
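Before launching bdevperf, queue_depth.sh provisions the target with a short rpc.py sequence (rpc_cmd is the harness wrapper that points scripts/rpc.py at the right RPC socket). Condensed from the trace above:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u 8192 sets an 8 KiB I/O unit size
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches a controller over /var/tmp/bdevperf.sock and runs verify at a deliberately deep queue (-q 1024 -o 4096 -t 10). Two sanity checks on the result below: 12808.83 IOPS at 4 KiB is 50.03 MiB/s, matching the bandwidth column, and by Little's law IOPS times mean latency should land near the configured depth, 12808.83 IOPS x 79.704 ms being roughly 1021, so the queue really was held at about 1024 for the whole run.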
00:14:37.856 [2024-07-15 18:29:23.394658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879924 ] 00:14:38.114 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.114 [2024-07-15 18:29:23.461285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.114 [2024-07-15 18:29:23.533659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.682 18:29:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.682 18:29:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:38.682 18:29:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.682 18:29:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.682 18:29:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:38.940 NVMe0n1 00:14:38.940 18:29:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.940 18:29:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.940 Running I/O for 10 seconds... 00:14:48.940 00:14:48.940 Latency(us) 00:14:48.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.940 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:48.940 Verification LBA range: start 0x0 length 0x4000 00:14:48.940 NVMe0n1 : 10.05 12808.83 50.03 0.00 0.00 79704.04 15042.07 49932.19 00:14:48.940 =================================================================================================================== 00:14:48.940 Total : 12808.83 50.03 0.00 0.00 79704.04 15042.07 49932.19 00:14:48.940 0 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3879924 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3879924 ']' 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3879924 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3879924 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3879924' 00:14:48.940 killing process with pid 3879924 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3879924 00:14:48.940 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.940 00:14:48.940 Latency(us) 00:14:48.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.940 
=================================================================================================================== 00:14:48.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.940 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3879924 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.198 rmmod nvme_tcp 00:14:49.198 rmmod nvme_fabrics 00:14:49.198 rmmod nvme_keyring 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3879679 ']' 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3879679 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3879679 ']' 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3879679 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.198 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3879679 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3879679' 00:14:49.456 killing process with pid 3879679 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3879679 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3879679 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.456 18:29:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.990 18:29:37 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.990 00:14:51.990 real 0m20.543s 00:14:51.990 user 0m24.772s 00:14:51.990 sys 0m5.891s 00:14:51.990 18:29:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.990 18:29:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.990 ************************************ 00:14:51.990 END TEST nvmf_queue_depth 00:14:51.990 ************************************ 00:14:51.990 18:29:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.990 18:29:37 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:51.990 18:29:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:51.990 18:29:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.990 18:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.990 ************************************ 00:14:51.990 START TEST nvmf_target_multipath 00:14:51.990 ************************************ 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:51.990 * Looking for test storage... 00:14:51.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.990 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.991 18:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:57.267 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:57.267 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:57.267 Found net devices under 0000:86:00.0: cvl_0_0 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.267 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:57.268 Found net devices under 0000:86:00.1: cvl_0_1 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.268 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:57.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:14:57.527 00:14:57.527 --- 10.0.0.2 ping statistics --- 00:14:57.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.527 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:14:57.527 00:14:57.527 --- 10.0.0.1 ping statistics --- 00:14:57.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.527 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:57.527 only one NIC for nvmf test 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.527 18:29:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.527 rmmod nvme_tcp 00:14:57.527 rmmod nvme_fabrics 00:14:57.527 rmmod nvme_keyring 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.527 18:29:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:00.069 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.070 00:15:00.070 real 0m8.022s 00:15:00.070 user 0m1.648s 00:15:00.070 sys 0m4.357s 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.070 18:29:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:00.070 ************************************ 00:15:00.070 END TEST nvmf_target_multipath 00:15:00.070 ************************************ 00:15:00.070 18:29:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.070 18:29:45 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:00.070 18:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.070 18:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.070 18:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.070 ************************************ 00:15:00.070 START TEST nvmf_zcopy 00:15:00.070 ************************************ 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:00.070 * Looking for test storage... 
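The multipath test above exits without exercising multipath at all: this bed exposes only one usable port pair, common.sh left NVMF_SECOND_TARGET_IP empty at @240, and the guard at multipath.sh@45-@48 prints 'only one NIC for nvmf test' and returns success so the suite records a pass and moves on. The guard is effectively the following sketch (the variable name is inferred from the common.sh trace, not shown verbatim in the @45 test):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini    # tear down the netns/iptables state set up above
        exit 0
    fi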
00:15:00.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
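The host identity used on the initiator side comes out of nvme-cli: common.sh@17-@19 above generate a host NQN from the machine UUID and keep it around as ready-made connect arguments. A sketch of that fragment (the ##*: expansion is an assumption about how the bare ID is derived; the resulting values match the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:00ad29c2-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare UUID, everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # NVME_CONNECT='nvme connect' pairs with these whenever a test
    # attaches a kernel initiator to the target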
00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.070 18:29:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.346 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:05.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.347 
18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:05.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:05.347 Found net devices under 0000:86:00.0: cvl_0_0 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:05.347 Found net devices under 0000:86:00.1: cvl_0_1 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.347 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.606 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.606 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.606 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.606 18:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:15:05.606 00:15:05.606 --- 10.0.0.2 ping statistics --- 00:15:05.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.606 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:05.606 00:15:05.606 --- 10.0.0.1 ping statistics --- 00:15:05.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.606 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.606 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3888679 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3888679 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3888679 ']' 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.607 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.867 [2024-07-15 18:29:51.175711] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:05.867 [2024-07-15 18:29:51.175756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.867 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.867 [2024-07-15 18:29:51.245125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.867 [2024-07-15 18:29:51.322232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.867 [2024-07-15 18:29:51.322268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:05.867 [2024-07-15 18:29:51.322275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.867 [2024-07-15 18:29:51.322281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.867 [2024-07-15 18:29:51.322286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.867 [2024-07-15 18:29:51.322303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.435 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.435 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:06.435 18:29:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.435 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.435 18:29:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.695 [2024-07-15 18:29:52.016209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:06.695 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.696 [2024-07-15 18:29:52.036332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.696 malloc0 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.696 
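At this point rpc_cmd has assembled everything the target needs: a TCP transport with zero-copy enabled (-o -c 0 --zcopy), subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks that the next call attaches as namespace 1. A hedged sketch of the same sequence driven directly with scripts/rpc.py (rpc_cmd in this log is a thin wrapper around it; the netns prefix and the default /var/tmp/spdk.sock socket are assumptions carried over from the trace):

    RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MiB ram disk, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # issued just below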
18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:06.696 { 00:15:06.696 "params": { 00:15:06.696 "name": "Nvme$subsystem", 00:15:06.696 "trtype": "$TEST_TRANSPORT", 00:15:06.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:06.696 "adrfam": "ipv4", 00:15:06.696 "trsvcid": "$NVMF_PORT", 00:15:06.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:06.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:06.696 "hdgst": ${hdgst:-false}, 00:15:06.696 "ddgst": ${ddgst:-false} 00:15:06.696 }, 00:15:06.696 "method": "bdev_nvme_attach_controller" 00:15:06.696 } 00:15:06.696 EOF 00:15:06.696 )") 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:06.696 18:29:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:06.696 "params": { 00:15:06.696 "name": "Nvme1", 00:15:06.696 "trtype": "tcp", 00:15:06.696 "traddr": "10.0.0.2", 00:15:06.696 "adrfam": "ipv4", 00:15:06.696 "trsvcid": "4420", 00:15:06.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.696 "hdgst": false, 00:15:06.696 "ddgst": false 00:15:06.696 }, 00:15:06.696 "method": "bdev_nvme_attach_controller" 00:15:06.696 }' 00:15:06.696 [2024-07-15 18:29:52.111924] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:06.696 [2024-07-15 18:29:52.111966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888817 ] 00:15:06.696 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.696 [2024-07-15 18:29:52.178359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.696 [2024-07-15 18:29:52.250636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.954 Running I/O for 10 seconds... 
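Both bdevperf runs in this test are configured the same way: gen_nvmf_target_json emits one bdev_nvme_attach_controller object per subsystem (the heredoc traced above) and bdevperf reads the result through --json /dev/fd/62, i.e. process substitution, so no config file touches disk. A standalone equivalent, assuming the usual "subsystems"/"bdev"/"config" wrapper that gen_nvmf_target_json puts around the printed object (the function may add further entries such as bdev_wait_for_examine):

    # assumed-equivalent config; the inner object matches the printf output above
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same flags as the run above: 10 s, queue depth 128, verify workload, 8 KiB I/O
    ./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192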
00:15:16.955
00:15:16.955 Latency(us)
00:15:16.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:16.955 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:16.955 Verification LBA range: start 0x0 length 0x1000
00:15:16.955 Nvme1n1 : 10.01 8844.97 69.10 0.00 0.00 14429.99 553.94 25590.25
00:15:16.955 ===================================================================================================================
00:15:16.955 Total : 8844.97 69.10 0.00 0.00 14429.99 553.94 25590.25
00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3890769 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:17.228 { 00:15:17.228 "params": { 00:15:17.228 "name": "Nvme$subsystem", 00:15:17.228 "trtype": "$TEST_TRANSPORT", 00:15:17.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.228 "adrfam": "ipv4", 00:15:17.228 "trsvcid": "$NVMF_PORT", 00:15:17.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.228 "hdgst": ${hdgst:-false}, 00:15:17.228 "ddgst": ${ddgst:-false} 00:15:17.228 }, 00:15:17.228 "method": "bdev_nvme_attach_controller" 00:15:17.228 } 00:15:17.228 EOF 00:15:17.228 )") 00:15:17.228 [2024-07-15 18:30:02.622096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.622126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:15:17.228 [2024-07-15 18:30:02.630083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.630094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:17.228 18:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.228 "params": { 00:15:17.228 "name": "Nvme1", 00:15:17.228 "trtype": "tcp", 00:15:17.228 "traddr": "10.0.0.2", 00:15:17.228 "adrfam": "ipv4", 00:15:17.228 "trsvcid": "4420", 00:15:17.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.228 "hdgst": false, 00:15:17.228 "ddgst": false 00:15:17.228 }, 00:15:17.228 "method": "bdev_nvme_attach_controller" 00:15:17.228 }' 00:15:17.228 [2024-07-15 18:30:02.638100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.638111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 [2024-07-15 18:30:02.646123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.646132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 [2024-07-15 18:30:02.654145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.654155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 [2024-07-15 18:30:02.662165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.228 [2024-07-15 18:30:02.662175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.228 [2024-07-15 18:30:02.664374] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:15:17.228 [2024-07-15 18:30:02.664417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890769 ] 00:15:17.228 [2024-07-15 18:30:02.674200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.674212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.682218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.682228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.229 [2024-07-15 18:30:02.694251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.694260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.702271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.702280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.710294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.710305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.718314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.718323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.726335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.726349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.726686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.229 [2024-07-15 18:30:02.734362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.734378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.742386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.742398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.750407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.750417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.758428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.758438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.766452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.766470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.774475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 
18:30:02.774488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.229 [2024-07-15 18:30:02.782496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.229 [2024-07-15 18:30:02.782509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.790513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.790522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.798534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.798543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.803841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.488 [2024-07-15 18:30:02.806555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.806565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.814581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.814605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.822614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.822631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.830634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.830646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.842656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.842668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.850673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.850684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.858694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.858706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.866718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.866729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.874737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.874747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.882758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.882767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.890793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.890811] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.898809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.898822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.906830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.906843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.914852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.914865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.922874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.922887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.930895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.930910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.938916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.938928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.946935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.488 [2024-07-15 18:30:02.946945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.488 [2024-07-15 18:30:02.954963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:02.954979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 Running I/O for 5 seconds... 
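From here to the end of the section the log is a steady stream of paired errors: each subsystem.c "Requested NSID 1 already in use" followed by an nvmf_rpc.c "Unable to add namespace" is one nvmf_subsystem_add_ns call re-using NSID 1 and being rejected while the 5-second randrw job is in flight. The nvmf_rpc_ns_paused frame in the message suggests the test is deliberately hammering the subsystem pause/resume path, so the error stream is expected output rather than a failure. A hedged replay of the call that produces each pair (the loop bound is illustrative, not taken from the test):

    # replay the rejected add-namespace call; '|| true' keeps the loop going past each failure
    for _ in $(seq 1 20); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done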
00:15:17.489 [2024-07-15 18:30:02.962979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:02.962989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:02.975227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:02.975246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:02.984160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:02.984180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:02.993606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:02.993624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:03.002707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:03.002725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:03.012076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:03.012094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:03.021105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:03.021123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:03.029935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:03.029953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.489 [2024-07-15 18:30:03.039016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.489 [2024-07-15 18:30:03.039034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.047594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.047616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.057413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.057430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.065891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.065909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.075280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.075297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.083843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.083861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.093067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 
[2024-07-15 18:30:03.093086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.102052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.102070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.111263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.111281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.120152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.120169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.129188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.129206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.138692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.138710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.147303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.147321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.156355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.156373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.165955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.165972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.175388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.175406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.184530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.184547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.193582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.193599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.202790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.202808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.211754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.211772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.220820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.220841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.229159] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.229176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.238049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.238067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.247174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.247192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.256349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.256382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.265103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.265121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.274719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.274737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.283706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.748 [2024-07-15 18:30:03.283724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.748 [2024-07-15 18:30:03.293250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.749 [2024-07-15 18:30:03.293268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.749 [2024-07-15 18:30:03.302468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.749 [2024-07-15 18:30:03.302486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.311654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.311672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.320637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.320654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.330063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.330081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.339815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.339833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.349125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.349143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.357743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.357760] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.367356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.367374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.375987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.376005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.385139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.385157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.394132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.394154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.403315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.403334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.412216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.412233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.420593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.420610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.429570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.429588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.438686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.438703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.446963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.446981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.455992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.456011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.465093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.465111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.474142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.474160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.483354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.483388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.490163] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.490181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.501262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.501280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.510645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.510663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.519873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.519891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.528943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.528963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.537974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.537994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.547119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.547137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.008 [2024-07-15 18:30:03.556631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.008 [2024-07-15 18:30:03.556648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.565181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.565204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.574752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.574770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.583974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.583992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.592289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.592308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.601150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.601168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.610205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.610224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.619443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.619462] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.628626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.628645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.637515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.637534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.646950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.646970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.655692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.655711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.665022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.665040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.674614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.674633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.683041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.683060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.692640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.692660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.701897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.701916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.710402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.710421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.719326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.719354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.727690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.727708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.736748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.736766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.746348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.746366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.754907] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.754926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.763977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.763995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.772366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.772384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.781236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.781255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.790410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.790429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.800123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.800142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.808701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.808719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.267 [2024-07-15 18:30:03.817516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.267 [2024-07-15 18:30:03.817535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.826729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.826748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.835384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.835404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.844487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.844505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.854256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.854274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.863542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.863562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.872904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.872923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.881423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.881442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.890532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.890550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.897456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.897475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.908071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.908089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.916768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.916786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.925865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.925885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.935134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.935153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.944243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.944262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.953247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.953266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.962117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.962135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.971134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.971153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.980020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.980038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.989287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.989306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:03.998435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:03.998454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.006951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.006969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.016110] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.016129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.024619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.024638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.033074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.033091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.042072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.042090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.051324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.051349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.060222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.060240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.070071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.070089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.526 [2024-07-15 18:30:04.078628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.526 [2024-07-15 18:30:04.078646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.088064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.088082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.096632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.096650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.105547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.105565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.114474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.114492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.123919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.123938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.132869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.132888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.141260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.141279] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.785 [2024-07-15 18:30:04.149832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.785 [2024-07-15 18:30:04.149850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair ("Requested NSID 1 already in use" from subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext, then "Unable to add namespace" from nvmf_rpc.c:1546:nvmf_rpc_ns_paused) repeats continuously, one iteration roughly every 9 ms, several hundred occurrences in all, from 18:30:04.158 through 18:30:06.887 ...]
00:15:21.374 [2024-07-15 18:30:06.896391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.374 [2024-07-15 18:30:06.896409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.374 [2024-07-15 18:30:06.905498]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.374 [2024-07-15 18:30:06.905517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.374 [2024-07-15 18:30:06.915063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.374 [2024-07-15 18:30:06.915081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.374 [2024-07-15 18:30:06.924205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.374 [2024-07-15 18:30:06.924222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.933318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.933336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.942372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.942390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.951305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.951322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.960209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.960227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.969269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.969286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.978210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.978232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.987155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.987174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:06.996706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:06.996725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.005820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.005839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.015313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.015332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.024322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.024345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.033268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.033291] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.042153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.042172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.051313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.051333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.060889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.060907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.070269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.070287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.632 [2024-07-15 18:30:07.079435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.632 [2024-07-15 18:30:07.079453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.088000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.088017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.097015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.097032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.106117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.106135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.115619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.115637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.124866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.124884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.133966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.133984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.142558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.142577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.151233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.151255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.161074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.161093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.170419] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.170437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.180052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.180070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.633 [2024-07-15 18:30:07.189408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.633 [2024-07-15 18:30:07.189426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.198518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.198536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.207724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.207743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.216861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.216878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.226012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.226030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.235050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.235067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.244003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.244021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.253017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.253035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.262124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.262142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.271113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.271130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.280722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.280740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.289300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.289317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.298583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.298601] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.307532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.307550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.316593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.316611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.325710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.325732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.334581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.334599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.343766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.343784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.353401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.353419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.363177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.363195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.372296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.372313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.380852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.380870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.390040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.390057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.399054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.399072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.408428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.408446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.417038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.417056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.426128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.426145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.435245] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.435264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.892 [2024-07-15 18:30:07.444311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.892 [2024-07-15 18:30:07.444329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.453449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.453467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.462422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.462439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.471395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.471412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.479878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.479896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.489111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.489128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.497713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.497734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.506656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.506674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.515693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.515711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.524895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.524913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.533953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.533971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.542467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.542486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.551417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.551435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.561039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.561058] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.570015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.570033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.579081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.579098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.587962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.587979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.596557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.596575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.605658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.605675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.614809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.614828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.624298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.624315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.632829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.632847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.641889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.641907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.651055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.651073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.659738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.659756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.668589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.668606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.151 [2024-07-15 18:30:07.677510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.151 [2024-07-15 18:30:07.677527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.152 [2024-07-15 18:30:07.686742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.152 [2024-07-15 18:30:07.686759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.152 [2024-07-15 18:30:07.695169] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.152 [2024-07-15 18:30:07.695187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.152 [2024-07-15 18:30:07.704269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.152 [2024-07-15 18:30:07.704287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.713491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.713509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.722472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.722490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.731542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.731559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.740579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.740596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.749920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.749938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.758885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.758904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.768058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.768077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.777749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.777767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.787090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.787109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.796716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.796734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.805352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.805370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.814493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.814511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.823520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.823539] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.832414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.832432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.842432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.842450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.409 [2024-07-15 18:30:07.851737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.409 [2024-07-15 18:30:07.851756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.861074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.861093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.869903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.869921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.879238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.879256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.888424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.888443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.897850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.897869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.907239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.907259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.916869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.916888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.926237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.926255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.935312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.935331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.944380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.944398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.953846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.953864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.410 [2024-07-15 18:30:07.963040] 
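The error storm condensed above is the zcopy test deliberately re-adding a namespace under an NSID that is already occupied; spdk_nvmf_subsystem_add_ns_ext rejects each attempt and the RPC layer logs the matching "Unable to add namespace". A minimal sketch of a loop that would reproduce the same pair of errors, assuming a running SPDK target, the stock scripts/rpc.py, and a hypothetical bdev malloc0 already attached to cnode1 as NSID 1:

  # Hypothetical reproduction; every call after the first requests the
  # fixed NSID 1 and is rejected with "Requested NSID 1 already in use".
  NQN=nqn.2016-06.io.spdk:cnode1
  for _ in $(seq 1 10); do
    ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
  done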
[2024-07-15 18:30:07.963040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.410 [2024-07-15 18:30:07.963058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.667 [2024-07-15 18:30:07.972172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.667 [2024-07-15 18:30:07.972190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.667 [2024-07-15 18:30:07.978796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.667 [2024-07-15 18:30:07.978814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.667
00:15:22.667 Latency(us)
00:15:22.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:22.667 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:22.667 Nvme1n1 : 5.01 17172.35 134.16 0.00 0.00 7447.47 3229.99 18974.23
00:15:22.667 ===================================================================================================================
00:15:22.667 Total : 17172.35 134.16 0.00 0.00 7447.47 3229.99 18974.23
00:15:22.667 [2024-07-15 18:30:07.986812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.667 [2024-07-15 18:30:07.986826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.667 [... the pair continues at roughly 8 ms intervals from 18:30:07.994 through 18:30:08.155 while the app shuts down ...]
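As a cross-check on the Latency(us) summary above: the MiB/s column follows directly from the IOPS column and the 8192-byte I/O size in the job line, since 17172.35 IOPS x 8192 B is about 140.7 MB/s, which is 134.16 MiB/s after dividing by 2^20, so the throughput and IOPS figures are mutually consistent over the 5.01 s runtime.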
[2024-07-15 18:30:08.163284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.667 [2024-07-15 18:30:08.163294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3890769) - No such process 00:15:22.667 18:30:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3890769 00:15:22.667 18:30:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:15:22.667 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.667 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:22.668 delay0 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.668 18:30:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:22.925 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.925 [2024-07-15 18:30:08.339490] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:29.480 Initializing NVMe Controllers 00:15:29.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.480 Initialization complete. Launching workers. 
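The abort workload launching here only has something to cancel because the preceding RPCs wrapped malloc0 in a delay bdev with ~1 s of artificial latency. Restated as standalone commands from the repo root (a sketch; the -r/-t/-w/-n values are assumed to be average and p99 read/write latencies in microseconds, which is worth confirming against ./scripts/rpc.py bdev_delay_create -h):

  # Wrap malloc0 so queued I/O lives long enough to be aborted.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Expose the delayed bdev as namespace 1 of cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive it over TCP with the abort example, as this test does.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'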
00:15:29.480 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 756 00:15:29.480 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1043, failed to submit 33 00:15:29.480 success 881, unsuccess 162, failed 0 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.480 rmmod nvme_tcp 00:15:29.480 rmmod nvme_fabrics 00:15:29.480 rmmod nvme_keyring 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.480 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3888679 ']' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3888679 ']' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888679' 00:15:29.481 killing process with pid 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3888679 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.481 18:30:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.018 18:30:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.018 00:15:32.018 real 0m31.748s 00:15:32.018 user 0m43.746s 00:15:32.018 sys 0m9.366s 00:15:32.018 18:30:16 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.018 18:30:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 ************************************ 00:15:32.018 END TEST nvmf_zcopy 00:15:32.018 ************************************ 00:15:32.018 18:30:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.018 18:30:16 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:32.018 18:30:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:32.018 18:30:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.018 18:30:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 ************************************ 00:15:32.018 START TEST nvmf_nmic 00:15:32.018 ************************************ 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:32.018 * Looking for test storage... 00:15:32.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain triple repeated, then /opt/golangci/1.54.2/bin ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries, rotated ...]:/var/lib/snapd/snap/bin 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same entries, rotated ...]:/var/lib/snapd/snap/bin 00:15:32.018 18:30:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH, as above ...]:/var/lib/snapd/snap/bin 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.019 18:30:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:37.294 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:37.294 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.294 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:37.295 Found net devices under 0000:86:00.0: cvl_0_0 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:37.295 Found net devices under 0000:86:00.1: cvl_0_1 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.295 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:15:37.554 00:15:37.554 --- 10.0.0.2 ping statistics --- 00:15:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.554 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:37.554 00:15:37.554 --- 10.0.0.1 ping statistics --- 00:15:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.554 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3896506 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3896506 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3896506 ']' 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.554 18:30:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.554 [2024-07-15 18:30:22.974502] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:37.554 [2024-07-15 18:30:22.974550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.554 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.554 [2024-07-15 18:30:23.044736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.813 [2024-07-15 18:30:23.125071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.813 [2024-07-15 18:30:23.125106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:37.813 [2024-07-15 18:30:23.125114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.813 [2024-07-15 18:30:23.125120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.813 [2024-07-15 18:30:23.125124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.813 [2024-07-15 18:30:23.125172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.813 [2024-07-15 18:30:23.125279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.813 [2024-07-15 18:30:23.125398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.813 [2024-07-15 18:30:23.125398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.379 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.379 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:38.379 18:30:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.379 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.379 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 [2024-07-15 18:30:23.811055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 Malloc0 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 [2024-07-15 18:30:23.862368] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:38.380 test case1: single bdev can't be used in multiple subsystems 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 [2024-07-15 18:30:23.886292] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:38.380 [2024-07-15 18:30:23.886310] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:38.380 [2024-07-15 18:30:23.886317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.380 request: 00:15:38.380 { 00:15:38.380 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:38.380 "namespace": { 00:15:38.380 "bdev_name": "Malloc0", 00:15:38.380 "no_auto_visible": false 00:15:38.380 }, 00:15:38.380 "method": "nvmf_subsystem_add_ns", 00:15:38.380 "req_id": 1 00:15:38.380 } 00:15:38.380 Got JSON-RPC error response 00:15:38.380 response: 00:15:38.380 { 00:15:38.380 "code": -32602, 00:15:38.380 "message": "Invalid parameters" 00:15:38.380 } 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:38.380 Adding namespace failed - expected result. 
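The failure above is the point of test case1: the first nvmf_subsystem_add_ns gives the NVMe-oF target an exclusive_write claim on Malloc0, so a second subsystem cannot open the same bdev and the RPC fails with -32602. A condensed sketch of the same sequence, written against scripts/rpc.py directly rather than the suite's rpc_cmd helper (NQNs, serials, and sizes taken from the trace above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed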
00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:38.380 test case2: host connect to nvmf target in multiple paths 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:38.380 [2024-07-15 18:30:23.898422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.380 18:30:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.757 18:30:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:40.692 18:30:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.692 18:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:40.692 18:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.692 18:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:40.692 18:30:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:42.594 18:30:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:42.594 [global] 00:15:42.594 thread=1 00:15:42.594 invalidate=1 00:15:42.594 rw=write 00:15:42.594 time_based=1 00:15:42.594 runtime=1 00:15:42.594 ioengine=libaio 00:15:42.594 direct=1 00:15:42.594 bs=4096 00:15:42.594 iodepth=1 00:15:42.594 norandommap=0 00:15:42.594 numjobs=1 00:15:42.594 00:15:42.594 verify_dump=1 00:15:42.594 verify_backlog=512 00:15:42.594 verify_state_save=0 00:15:42.594 do_verify=1 00:15:42.594 verify=crc32c-intel 00:15:42.594 [job0] 00:15:42.594 filename=/dev/nvme0n1 00:15:42.852 Could not set queue depth (nvme0n1) 00:15:43.110 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:43.110 fio-3.35 00:15:43.110 Starting 1 thread 00:15:44.047 00:15:44.047 job0: (groupid=0, jobs=1): err= 0: pid=3897588: Mon Jul 15 18:30:29 2024 00:15:44.047 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:15:44.047 slat (nsec): min=9786, max=24058, avg=21822.18, stdev=2762.71 
00:15:44.047 clat (usec): min=40835, max=42419, avg=41056.43, stdev=333.78 00:15:44.047 lat (usec): min=40859, max=42441, avg=41078.26, stdev=333.87 00:15:44.047 clat percentiles (usec): 00:15:44.047 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:44.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:44.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:15:44.047 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:44.047 | 99.99th=[42206] 00:15:44.047 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:15:44.047 slat (usec): min=10, max=23954, avg=58.58, stdev=1058.13 00:15:44.047 clat (usec): min=123, max=306, avg=152.43, stdev=11.76 00:15:44.047 lat (usec): min=135, max=24261, avg=211.02, stdev=1065.01 00:15:44.047 clat percentiles (usec): 00:15:44.047 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:15:44.047 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 153], 00:15:44.047 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 161], 95.00th=[ 163], 00:15:44.047 | 99.00th=[ 172], 99.50th=[ 227], 99.90th=[ 306], 99.95th=[ 306], 00:15:44.047 | 99.99th=[ 306] 00:15:44.047 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:15:44.047 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:15:44.047 lat (usec) : 250=95.51%, 500=0.37% 00:15:44.047 lat (msec) : 50=4.12% 00:15:44.047 cpu : usr=0.59%, sys=0.69%, ctx=537, majf=0, minf=2 00:15:44.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.047 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:44.047 00:15:44.047 Run status group 0 (all jobs): 00:15:44.047 READ: bw=86.7KiB/s (88.8kB/s), 86.7KiB/s-86.7KiB/s (88.8kB/s-88.8kB/s), io=88.0KiB (90.1kB), run=1015-1015msec 00:15:44.047 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:15:44.047 00:15:44.047 Disk stats (read/write): 00:15:44.047 nvme0n1: ios=45/512, merge=0/0, ticks=1766/70, in_queue=1836, util=98.40% 00:15:44.047 18:30:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.306 rmmod nvme_tcp 00:15:44.306 rmmod nvme_fabrics 00:15:44.306 rmmod nvme_keyring 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3896506 ']' 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3896506 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3896506 ']' 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3896506 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.306 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896506 00:15:44.565 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.565 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.565 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896506' 00:15:44.565 killing process with pid 3896506 00:15:44.565 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3896506 00:15:44.565 18:30:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3896506 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.565 18:30:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.100 18:30:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.100 00:15:47.100 real 0m15.127s 00:15:47.100 user 0m34.751s 00:15:47.100 sys 0m5.040s 00:15:47.100 18:30:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.100 18:30:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:47.100 ************************************ 00:15:47.100 END TEST nvmf_nmic 00:15:47.100 ************************************ 00:15:47.100 18:30:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:47.100 18:30:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.100 18:30:32 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.100 18:30:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.100 18:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.100 ************************************ 00:15:47.100 START TEST nvmf_fio_target 00:15:47.100 ************************************ 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.100 * Looking for test storage... 00:15:47.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.100 18:30:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.426 18:30:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:52.426 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:52.426 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.426 18:30:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:52.426 Found net devices under 0000:86:00.0: cvl_0_0 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:52.426 Found net devices under 0000:86:00.1: cvl_0_1 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.426 18:30:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:15:52.686 00:15:52.686 --- 10.0.0.2 ping statistics --- 00:15:52.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.686 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:15:52.686 00:15:52.686 --- 10.0.0.1 ping statistics --- 00:15:52.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.686 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:52.686 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3901298 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3901298 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3901298 ']' 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
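As in the nmic run, nvmftestinit has wired the two E810 ports into a point-to-point topology before the target starts: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched under ip netns exec. A minimal standalone reproduction of that wiring, assuming two physical ports named eth_tgt and eth_ini (hypothetical names):

    ip netns add tgt_ns                                  # namespace that will host nvmf_tgt
    ip link set eth_tgt netns tgt_ns                     # target-side port leaves the root namespace
    ip addr add 10.0.0.1/24 dev eth_ini                  # initiator address, root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec tgt_ns ip link set eth_tgt up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec tgt_ns ping -c 1 10.0.0.1              # and back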
00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.687 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.687 [2024-07-15 18:30:38.163681] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:52.687 [2024-07-15 18:30:38.163727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.687 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.687 [2024-07-15 18:30:38.235358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.945 [2024-07-15 18:30:38.314563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.945 [2024-07-15 18:30:38.314600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.945 [2024-07-15 18:30:38.314607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.945 [2024-07-15 18:30:38.314613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.945 [2024-07-15 18:30:38.314618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.945 [2024-07-15 18:30:38.314732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.945 [2024-07-15 18:30:38.314840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.945 [2024-07-15 18:30:38.314943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.945 [2024-07-15 18:30:38.314945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.512 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.512 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:53.512 18:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.512 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.512 18:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.512 18:30:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.512 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.770 [2024-07-15 18:30:39.159766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.770 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.029 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:54.029 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.289 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:54.289 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.289 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
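fio.sh is mid-way through building the bdev set at this point; the calls that follow below finish it. The target ends up exporting four namespaces on cnode1 (two plain malloc bdevs Malloc0 and Malloc1, a RAID-0 over Malloc2/Malloc3, and a concat over Malloc4/Malloc5/Malloc6), which is why waitforserial later expects four nvme devices and the fio wrapper runs four jobs. Collapsed into plain rpc.py calls, a sketch with sizes and names as traced:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_create 64 512 -b "$b"           # 64 MiB each, 512 B blocks
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                # striped pair
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'  # concatenated
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420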
00:15:54.289 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.548 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:54.548 18:30:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:54.805 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.061 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:55.061 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.061 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:55.061 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.319 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:55.319 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:55.577 18:30:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:55.577 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:55.577 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.836 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:55.836 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.094 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.094 [2024-07-15 18:30:41.604683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.094 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:56.353 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:56.611 18:30:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:57.542 18:30:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:00.073 18:30:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:00.073 [global] 00:16:00.073 thread=1 00:16:00.073 invalidate=1 00:16:00.073 rw=write 00:16:00.073 time_based=1 00:16:00.073 runtime=1 00:16:00.073 ioengine=libaio 00:16:00.073 direct=1 00:16:00.073 bs=4096 00:16:00.073 iodepth=1 00:16:00.073 norandommap=0 00:16:00.073 numjobs=1 00:16:00.073 00:16:00.073 verify_dump=1 00:16:00.073 verify_backlog=512 00:16:00.073 verify_state_save=0 00:16:00.073 do_verify=1 00:16:00.073 verify=crc32c-intel 00:16:00.073 [job0] 00:16:00.073 filename=/dev/nvme0n1 00:16:00.073 [job1] 00:16:00.073 filename=/dev/nvme0n2 00:16:00.073 [job2] 00:16:00.073 filename=/dev/nvme0n3 00:16:00.073 [job3] 00:16:00.073 filename=/dev/nvme0n4 00:16:00.073 Could not set queue depth (nvme0n1) 00:16:00.073 Could not set queue depth (nvme0n2) 00:16:00.073 Could not set queue depth (nvme0n3) 00:16:00.073 Could not set queue depth (nvme0n4) 00:16:00.073 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.073 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.073 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.073 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.073 fio-3.35 00:16:00.073 Starting 4 threads 00:16:01.472 00:16:01.472 job0: (groupid=0, jobs=1): err= 0: pid=3902690: Mon Jul 15 18:30:46 2024 00:16:01.472 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:16:01.472 slat (nsec): min=10699, max=23085, avg=21211.95, stdev=2472.28 00:16:01.472 clat (usec): min=40796, max=41124, avg=40968.97, stdev=62.57 00:16:01.472 lat (usec): min=40819, max=41134, avg=40990.18, stdev=61.22 00:16:01.472 clat percentiles (usec): 00:16:01.472 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:01.472 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:01.472 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:01.472 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:01.472 | 99.99th=[41157] 00:16:01.472 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:16:01.472 slat (nsec): min=11222, max=40375, avg=13241.54, stdev=2105.09 
00:16:01.472 clat (usec): min=125, max=342, avg=185.36, stdev=23.33 00:16:01.472 lat (usec): min=137, max=383, avg=198.61, stdev=23.94 00:16:01.472 clat percentiles (usec): 00:16:01.472 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:16:01.472 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 186], 00:16:01.472 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 227], 00:16:01.472 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 343], 00:16:01.472 | 99.99th=[ 343] 00:16:01.472 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.472 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.472 lat (usec) : 250=93.26%, 500=2.62% 00:16:01.472 lat (msec) : 50=4.12% 00:16:01.472 cpu : usr=0.50%, sys=1.00%, ctx=534, majf=0, minf=1 00:16:01.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.472 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.472 job1: (groupid=0, jobs=1): err= 0: pid=3902691: Mon Jul 15 18:30:46 2024 00:16:01.472 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:16:01.472 slat (nsec): min=9534, max=25060, avg=21364.21, stdev=3744.35 00:16:01.472 clat (usec): min=367, max=41113, avg=39248.89, stdev=8282.48 00:16:01.472 lat (usec): min=392, max=41134, avg=39270.26, stdev=8281.74 00:16:01.472 clat percentiles (usec): 00:16:01.472 | 1.00th=[ 367], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:01.472 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:01.472 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:01.472 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:01.472 | 99.99th=[41157] 00:16:01.472 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:01.472 slat (nsec): min=9168, max=36861, avg=10520.76, stdev=1668.86 00:16:01.472 clat (usec): min=127, max=291, avg=162.61, stdev=17.96 00:16:01.472 lat (usec): min=137, max=328, avg=173.13, stdev=18.40 00:16:01.472 clat percentiles (usec): 00:16:01.472 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:01.472 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:16:01.472 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:16:01.472 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 293], 99.95th=[ 293], 00:16:01.472 | 99.99th=[ 293] 00:16:01.472 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.472 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.472 lat (usec) : 250=95.34%, 500=0.37% 00:16:01.472 lat (msec) : 50=4.29% 00:16:01.472 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:16:01.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.472 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.472 job2: (groupid=0, jobs=1): err= 0: pid=3902692: Mon Jul 15 18:30:46 2024 00:16:01.472 read: IOPS=50, BW=201KiB/s 
(205kB/s)(204KiB/1017msec) 00:16:01.472 slat (nsec): min=8911, max=34362, avg=13163.47, stdev=5197.60 00:16:01.472 clat (usec): min=196, max=41996, avg=17867.55, stdev=20438.96 00:16:01.472 lat (usec): min=208, max=42008, avg=17880.71, stdev=20440.38 00:16:01.472 clat percentiles (usec): 00:16:01.472 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 223], 00:16:01.472 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 269], 60.00th=[41157], 00:16:01.472 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:16:01.472 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:01.472 | 99.99th=[42206] 00:16:01.473 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:16:01.473 slat (nsec): min=11053, max=41853, avg=13434.00, stdev=3778.90 00:16:01.473 clat (usec): min=136, max=1955, avg=188.34, stdev=81.14 00:16:01.473 lat (usec): min=156, max=1969, avg=201.77, stdev=81.17 00:16:01.473 clat percentiles (usec): 00:16:01.473 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:16:01.473 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:16:01.473 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 225], 00:16:01.473 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 1958], 99.95th=[ 1958], 00:16:01.473 | 99.99th=[ 1958] 00:16:01.473 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:16:01.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:01.473 lat (usec) : 250=93.43%, 500=2.49% 00:16:01.473 lat (msec) : 2=0.18%, 50=3.91% 00:16:01.473 cpu : usr=0.69%, sys=0.69%, ctx=565, majf=0, minf=1 00:16:01.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.473 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.473 job3: (groupid=0, jobs=1): err= 0: pid=3902693: Mon Jul 15 18:30:46 2024 00:16:01.473 read: IOPS=2240, BW=8963KiB/s (9178kB/s)(8972KiB/1001msec) 00:16:01.473 slat (nsec): min=6474, max=27599, avg=7275.92, stdev=763.33 00:16:01.473 clat (usec): min=171, max=480, avg=249.34, stdev=20.76 00:16:01.473 lat (usec): min=182, max=487, avg=256.61, stdev=20.75 00:16:01.473 clat percentiles (usec): 00:16:01.473 | 1.00th=[ 198], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:16:01.473 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:16:01.473 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 265], 00:16:01.473 | 99.00th=[ 375], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 424], 00:16:01.473 | 99.99th=[ 482] 00:16:01.473 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:01.473 slat (nsec): min=9398, max=47716, avg=10638.48, stdev=1392.38 00:16:01.473 clat (usec): min=110, max=354, avg=150.88, stdev=40.65 00:16:01.473 lat (usec): min=120, max=388, avg=161.52, stdev=41.02 00:16:01.473 clat percentiles (usec): 00:16:01.473 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:16:01.473 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:16:01.473 | 70.00th=[ 145], 80.00th=[ 172], 90.00th=[ 239], 95.00th=[ 243], 00:16:01.473 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 310], 99.95th=[ 343], 00:16:01.473 | 99.99th=[ 355] 00:16:01.473 bw ( KiB/s): min=10256, max=10256, per=64.66%, 
avg=10256.00, stdev= 0.00, samples=1 00:16:01.473 iops : min= 2564, max= 2564, avg=2564.00, stdev= 0.00, samples=1 00:16:01.473 lat (usec) : 250=80.60%, 500=19.40% 00:16:01.473 cpu : usr=2.60%, sys=4.30%, ctx=4804, majf=0, minf=2 00:16:01.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.473 issued rwts: total=2243,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.473 00:16:01.473 Run status group 0 (all jobs): 00:16:01.473 READ: bw=9061KiB/s (9278kB/s), 87.5KiB/s-8963KiB/s (89.6kB/s-9178kB/s), io=9360KiB (9585kB), run=1001-1033msec 00:16:01.473 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:16:01.473 00:16:01.473 Disk stats (read/write): 00:16:01.473 nvme0n1: ios=67/512, merge=0/0, ticks=724/85, in_queue=809, util=81.96% 00:16:01.473 nvme0n2: ios=42/512, merge=0/0, ticks=753/81, in_queue=834, util=86.32% 00:16:01.473 nvme0n3: ios=103/512, merge=0/0, ticks=944/87, in_queue=1031, util=97.61% 00:16:01.473 nvme0n4: ios=1810/2048, merge=0/0, ticks=1338/305, in_queue=1643, util=98.56% 00:16:01.473 18:30:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:01.473 [global] 00:16:01.473 thread=1 00:16:01.473 invalidate=1 00:16:01.473 rw=randwrite 00:16:01.473 time_based=1 00:16:01.473 runtime=1 00:16:01.473 ioengine=libaio 00:16:01.473 direct=1 00:16:01.473 bs=4096 00:16:01.473 iodepth=1 00:16:01.473 norandommap=0 00:16:01.473 numjobs=1 00:16:01.473 00:16:01.473 verify_dump=1 00:16:01.473 verify_backlog=512 00:16:01.473 verify_state_save=0 00:16:01.473 do_verify=1 00:16:01.473 verify=crc32c-intel 00:16:01.473 [job0] 00:16:01.473 filename=/dev/nvme0n1 00:16:01.473 [job1] 00:16:01.473 filename=/dev/nvme0n2 00:16:01.473 [job2] 00:16:01.473 filename=/dev/nvme0n3 00:16:01.473 [job3] 00:16:01.473 filename=/dev/nvme0n4 00:16:01.473 Could not set queue depth (nvme0n1) 00:16:01.473 Could not set queue depth (nvme0n2) 00:16:01.473 Could not set queue depth (nvme0n3) 00:16:01.473 Could not set queue depth (nvme0n4) 00:16:01.729 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.729 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.729 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.729 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.729 fio-3.35 00:16:01.729 Starting 4 threads 00:16:03.095 00:16:03.095 job0: (groupid=0, jobs=1): err= 0: pid=3903065: Mon Jul 15 18:30:48 2024 00:16:03.095 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:03.095 slat (nsec): min=7396, max=26517, avg=8790.19, stdev=2974.91 00:16:03.095 clat (usec): min=184, max=41942, avg=1533.70, stdev=6856.56 00:16:03.095 lat (usec): min=193, max=41965, avg=1542.49, stdev=6858.53 00:16:03.095 clat percentiles (usec): 00:16:03.095 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 269], 00:16:03.095 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 322], 60.00th=[ 355], 00:16:03.095 | 70.00th=[ 383], 
80.00th=[ 420], 90.00th=[ 502], 95.00th=[ 553], 00:16:03.095 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:03.095 | 99.99th=[41681] 00:16:03.095 write: IOPS=872, BW=3489KiB/s (3572kB/s)(3492KiB/1001msec); 0 zone resets 00:16:03.095 slat (nsec): min=7339, max=39165, avg=11992.15, stdev=1777.38 00:16:03.095 clat (usec): min=123, max=439, avg=223.64, stdev=54.40 00:16:03.095 lat (usec): min=135, max=451, avg=235.63, stdev=54.55 00:16:03.095 clat percentiles (usec): 00:16:03.095 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 161], 00:16:03.095 | 30.00th=[ 190], 40.00th=[ 215], 50.00th=[ 231], 60.00th=[ 241], 00:16:03.095 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 322], 00:16:03.095 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 441], 99.95th=[ 441], 00:16:03.095 | 99.99th=[ 441] 00:16:03.095 bw ( KiB/s): min= 4096, max= 4096, per=22.85%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.095 lat (usec) : 250=48.59%, 500=47.65%, 750=2.60% 00:16:03.095 lat (msec) : 4=0.07%, 50=1.08% 00:16:03.095 cpu : usr=1.80%, sys=1.80%, ctx=1386, majf=0, minf=1 00:16:03.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.095 issued rwts: total=512,873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.095 job1: (groupid=0, jobs=1): err= 0: pid=3903068: Mon Jul 15 18:30:48 2024 00:16:03.095 read: IOPS=284, BW=1138KiB/s (1166kB/s)(1176KiB/1033msec) 00:16:03.095 slat (nsec): min=6812, max=34030, avg=8627.85, stdev=3876.66 00:16:03.095 clat (usec): min=200, max=42003, avg=3106.64, stdev=10397.39 00:16:03.095 lat (usec): min=208, max=42026, avg=3115.27, stdev=10400.49 00:16:03.095 clat percentiles (usec): 00:16:03.095 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:16:03.096 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:16:03.096 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[41157], 00:16:03.096 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:03.096 | 99.99th=[42206] 00:16:03.096 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:03.096 slat (nsec): min=9108, max=46910, avg=10972.63, stdev=2657.86 00:16:03.096 clat (usec): min=137, max=393, avg=212.88, stdev=47.39 00:16:03.096 lat (usec): min=154, max=404, avg=223.85, stdev=46.85 00:16:03.096 clat percentiles (usec): 00:16:03.096 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 169], 00:16:03.096 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 204], 60.00th=[ 229], 00:16:03.096 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 293], 00:16:03.096 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 396], 00:16:03.096 | 99.99th=[ 396] 00:16:03.096 bw ( KiB/s): min= 4096, max= 4096, per=22.85%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.096 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.096 lat (usec) : 250=77.42%, 500=19.98% 00:16:03.096 lat (msec) : 50=2.61% 00:16:03.096 cpu : usr=0.58%, sys=0.97%, ctx=807, majf=0, minf=2 00:16:03.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 issued rwts: total=294,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.096 job2: (groupid=0, jobs=1): err= 0: pid=3903069: Mon Jul 15 18:30:48 2024 00:16:03.096 read: IOPS=2370, BW=9483KiB/s (9710kB/s)(9492KiB/1001msec) 00:16:03.096 slat (usec): min=6, max=100, avg= 8.13, stdev= 2.68 00:16:03.096 clat (usec): min=161, max=1601, avg=225.89, stdev=61.81 00:16:03.096 lat (usec): min=168, max=1609, avg=234.02, stdev=62.02 00:16:03.096 clat percentiles (usec): 00:16:03.096 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 188], 00:16:03.096 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 223], 00:16:03.096 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 306], 95.00th=[ 347], 00:16:03.096 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 553], 99.95th=[ 1221], 00:16:03.096 | 99.99th=[ 1598] 00:16:03.096 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:03.096 slat (nsec): min=9958, max=52973, avg=11575.47, stdev=2343.99 00:16:03.096 clat (usec): min=120, max=265, avg=156.16, stdev=25.37 00:16:03.096 lat (usec): min=131, max=299, avg=167.73, stdev=25.65 00:16:03.096 clat percentiles (usec): 00:16:03.096 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:16:03.096 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 159], 00:16:03.096 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 200], 00:16:03.096 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 265], 00:16:03.096 | 99.99th=[ 265] 00:16:03.096 bw ( KiB/s): min=12120, max=12120, per=67.62%, avg=12120.00, stdev= 0.00, samples=1 00:16:03.096 iops : min= 3030, max= 3030, avg=3030.00, stdev= 0.00, samples=1 00:16:03.096 lat (usec) : 250=90.05%, 500=9.89%, 750=0.02% 00:16:03.096 lat (msec) : 2=0.04% 00:16:03.096 cpu : usr=3.40%, sys=8.40%, ctx=4934, majf=0, minf=1 00:16:03.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 issued rwts: total=2373,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.096 job3: (groupid=0, jobs=1): err= 0: pid=3903070: Mon Jul 15 18:30:48 2024 00:16:03.096 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:03.096 slat (nsec): min=7206, max=27499, avg=8600.24, stdev=2934.27 00:16:03.096 clat (usec): min=227, max=41981, avg=1614.38, stdev=7088.85 00:16:03.096 lat (usec): min=237, max=42001, avg=1622.98, stdev=7091.21 00:16:03.096 clat percentiles (usec): 00:16:03.096 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:16:03.096 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:16:03.096 | 70.00th=[ 359], 80.00th=[ 392], 90.00th=[ 449], 95.00th=[ 490], 00:16:03.096 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:03.096 | 99.99th=[42206] 00:16:03.096 write: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec); 0 zone resets 00:16:03.096 slat (nsec): min=10266, max=39392, avg=12701.24, stdev=2707.40 00:16:03.096 clat (usec): min=125, max=475, avg=228.80, stdev=44.59 00:16:03.096 lat (usec): min=136, max=486, avg=241.50, stdev=45.46 00:16:03.096 clat percentiles (usec): 00:16:03.096 | 1.00th=[ 131], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 196], 
00:16:03.096 | 30.00th=[ 217], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 241], 00:16:03.096 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 297], 00:16:03.096 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 478], 99.95th=[ 478], 00:16:03.096 | 99.99th=[ 478] 00:16:03.096 bw ( KiB/s): min= 4096, max= 4096, per=22.85%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.096 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.096 lat (usec) : 250=44.73%, 500=53.51%, 750=0.42% 00:16:03.096 lat (msec) : 50=1.34% 00:16:03.096 cpu : usr=1.10%, sys=1.90%, ctx=1197, majf=0, minf=1 00:16:03.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.096 issued rwts: total=512,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.096 00:16:03.096 Run status group 0 (all jobs): 00:16:03.096 READ: bw=14.0MiB/s (14.6MB/s), 1138KiB/s-9483KiB/s (1166kB/s-9710kB/s), io=14.4MiB (15.1MB), run=1001-1033msec 00:16:03.096 WRITE: bw=17.5MiB/s (18.4MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=18.1MiB (19.0MB), run=1001-1033msec 00:16:03.096 00:16:03.096 Disk stats (read/write): 00:16:03.096 nvme0n1: ios=335/512, merge=0/0, ticks=1643/120, in_queue=1763, util=97.60% 00:16:03.096 nvme0n2: ios=328/512, merge=0/0, ticks=1694/107, in_queue=1801, util=96.75% 00:16:03.096 nvme0n3: ios=2048/2362, merge=0/0, ticks=418/340, in_queue=758, util=89.06% 00:16:03.096 nvme0n4: ios=198/512, merge=0/0, ticks=890/115, in_queue=1005, util=90.67% 00:16:03.096 18:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:03.096 [global] 00:16:03.096 thread=1 00:16:03.096 invalidate=1 00:16:03.096 rw=write 00:16:03.096 time_based=1 00:16:03.096 runtime=1 00:16:03.096 ioengine=libaio 00:16:03.096 direct=1 00:16:03.096 bs=4096 00:16:03.096 iodepth=128 00:16:03.096 norandommap=0 00:16:03.096 numjobs=1 00:16:03.096 00:16:03.096 verify_dump=1 00:16:03.096 verify_backlog=512 00:16:03.096 verify_state_save=0 00:16:03.096 do_verify=1 00:16:03.096 verify=crc32c-intel 00:16:03.096 [job0] 00:16:03.096 filename=/dev/nvme0n1 00:16:03.096 [job1] 00:16:03.096 filename=/dev/nvme0n2 00:16:03.096 [job2] 00:16:03.096 filename=/dev/nvme0n3 00:16:03.096 [job3] 00:16:03.096 filename=/dev/nvme0n4 00:16:03.096 Could not set queue depth (nvme0n1) 00:16:03.096 Could not set queue depth (nvme0n2) 00:16:03.096 Could not set queue depth (nvme0n3) 00:16:03.096 Could not set queue depth (nvme0n4) 00:16:03.096 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.096 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.096 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.096 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.096 fio-3.35 00:16:03.096 Starting 4 threads 00:16:04.473 00:16:04.473 job0: (groupid=0, jobs=1): err= 0: pid=3903439: Mon Jul 15 18:30:49 2024 00:16:04.473 read: IOPS=4011, BW=15.7MiB/s (16.4MB/s)(16.0MiB/1021msec) 00:16:04.473 slat (nsec): min=1017, max=9130.2k, avg=86787.79, stdev=595713.24 
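The two runs above used iodepth=1; the fio-wrapper invocations from here on pass -d 128, which becomes iodepth=128, i.e. up to 128 outstanding I/Os per job through libaio with direct=1. That is why completion latencies jump from the hundreds of microseconds seen at depth 1 to multi-millisecond clat averages. The flag-to-jobfile mapping can be read off the [global] dumps printed before each run (the wrapper itself is spdk/scripts/fio-wrapper; this mapping is inferred from its output, not from its source):

    # inferred from the generated job files above
    #   -p nvmf       -> one [jobN] per /dev/nvme0nN namespace
    #   -i 4096       -> bs=4096
    #   -d 128        -> iodepth=128
    #   -t write      -> rw=write  (-t randwrite -> rw=randwrite, etc.)
    #   -r 1          -> runtime=1 with time_based=1
    #   -v            -> do_verify=1, verify=crc32c-intel, verify_dump=1

As a rough Little's-law sanity check (in-flight I/Os ≈ IOPS × mean latency), job0 of this run retires about 4011 read IOPS at ~10.8 ms clat and 4378 write IOPS at ~19.1 ms, i.e. roughly 43 + 84 ≈ 127 I/Os in flight on average — essentially the full queue depth of 128.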
00:16:04.473 clat (usec): min=3978, max=24521, avg=10800.44, stdev=3486.10 00:16:04.473 lat (usec): min=3999, max=24523, avg=10887.23, stdev=3533.63 00:16:04.473 clat percentiles (usec): 00:16:04.473 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 8455], 00:16:04.473 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9896], 00:16:04.473 | 70.00th=[11600], 80.00th=[13173], 90.00th=[16057], 95.00th=[18482], 00:16:04.473 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24511], 99.95th=[24511], 00:16:04.473 | 99.99th=[24511] 00:16:04.473 write: IOPS=4378, BW=17.1MiB/s (17.9MB/s)(17.5MiB/1021msec); 0 zone resets 00:16:04.473 slat (usec): min=2, max=10943, avg=136.86, stdev=750.02 00:16:04.473 clat (usec): min=715, max=84618, avg=19078.65, stdev=16840.04 00:16:04.473 lat (usec): min=1087, max=84633, avg=19215.50, stdev=16939.51 00:16:04.473 clat percentiles (usec): 00:16:04.473 | 1.00th=[ 2057], 5.00th=[ 4490], 10.00th=[ 6325], 20.00th=[ 7439], 00:16:04.473 | 30.00th=[ 8029], 40.00th=[ 9241], 50.00th=[13960], 60.00th=[16581], 00:16:04.473 | 70.00th=[19792], 80.00th=[27657], 90.00th=[46400], 95.00th=[51119], 00:16:04.473 | 99.00th=[83362], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:16:04.473 | 99.99th=[84411] 00:16:04.473 bw ( KiB/s): min=16768, max=17932, per=25.29%, avg=17350.00, stdev=823.07, samples=2 00:16:04.473 iops : min= 4192, max= 4483, avg=4337.50, stdev=205.77, samples=2 00:16:04.473 lat (usec) : 750=0.01% 00:16:04.473 lat (msec) : 2=0.46%, 4=1.54%, 10=50.51%, 20=30.73%, 50=13.74% 00:16:04.473 lat (msec) : 100=3.01% 00:16:04.473 cpu : usr=3.92%, sys=3.82%, ctx=458, majf=0, minf=1 00:16:04.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:04.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.473 issued rwts: total=4096,4470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.473 job1: (groupid=0, jobs=1): err= 0: pid=3903440: Mon Jul 15 18:30:49 2024 00:16:04.473 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1010msec) 00:16:04.473 slat (nsec): min=1006, max=15331k, avg=118791.16, stdev=807157.41 00:16:04.473 clat (usec): min=3701, max=55107, avg=12610.75, stdev=8736.39 00:16:04.473 lat (usec): min=4388, max=55114, avg=12729.54, stdev=8820.61 00:16:04.473 clat percentiles (usec): 00:16:04.473 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:16:04.473 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:16:04.473 | 70.00th=[10290], 80.00th=[11338], 90.00th=[19792], 95.00th=[34341], 00:16:04.473 | 99.00th=[52167], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:16:04.473 | 99.99th=[55313] 00:16:04.473 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:16:04.473 slat (nsec): min=1848, max=18731k, avg=82159.08, stdev=421402.61 00:16:04.473 clat (usec): min=711, max=55093, avg=13476.55, stdev=8351.38 00:16:04.473 lat (usec): min=720, max=55098, avg=13558.70, stdev=8375.55 00:16:04.473 clat percentiles (usec): 00:16:04.473 | 1.00th=[ 2769], 5.00th=[ 4621], 10.00th=[ 7046], 20.00th=[ 9241], 00:16:04.473 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[11469], 00:16:04.473 | 70.00th=[16057], 80.00th=[16909], 90.00th=[20055], 95.00th=[30802], 00:16:04.473 | 99.00th=[47973], 99.50th=[48497], 99.90th=[53216], 99.95th=[54789], 00:16:04.473 | 99.99th=[55313] 
00:16:04.473 bw ( KiB/s): min=16352, max=24112, per=29.49%, avg=20232.00, stdev=5487.15, samples=2 00:16:04.473 iops : min= 4088, max= 6028, avg=5058.00, stdev=1371.79, samples=2 00:16:04.473 lat (usec) : 750=0.05% 00:16:04.473 lat (msec) : 2=0.16%, 4=1.39%, 10=44.88%, 20=43.65%, 50=8.92% 00:16:04.473 lat (msec) : 100=0.94% 00:16:04.473 cpu : usr=3.17%, sys=5.05%, ctx=541, majf=0, minf=1 00:16:04.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:04.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.473 issued rwts: total=4674,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.473 job2: (groupid=0, jobs=1): err= 0: pid=3903441: Mon Jul 15 18:30:49 2024 00:16:04.473 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:16:04.473 slat (nsec): min=1403, max=14342k, avg=108109.94, stdev=748919.83 00:16:04.473 clat (usec): min=4379, max=33233, avg=12808.81, stdev=4531.60 00:16:04.474 lat (usec): min=4403, max=33258, avg=12916.92, stdev=4582.13 00:16:04.474 clat percentiles (usec): 00:16:04.474 | 1.00th=[ 5735], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9634], 00:16:04.474 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11338], 60.00th=[11863], 00:16:04.474 | 70.00th=[13304], 80.00th=[15795], 90.00th=[19530], 95.00th=[22938], 00:16:04.474 | 99.00th=[27132], 99.50th=[28181], 99.90th=[30540], 99.95th=[30540], 00:16:04.474 | 99.99th=[33162] 00:16:04.474 write: IOPS=4057, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:16:04.474 slat (usec): min=2, max=21824, avg=144.55, stdev=713.88 00:16:04.474 clat (usec): min=2293, max=44203, avg=19385.64, stdev=8958.28 00:16:04.474 lat (usec): min=2303, max=44211, avg=19530.19, stdev=9004.20 00:16:04.474 clat percentiles (usec): 00:16:04.474 | 1.00th=[ 3884], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[10683], 00:16:04.474 | 30.00th=[15664], 40.00th=[16581], 50.00th=[16909], 60.00th=[17695], 00:16:04.474 | 70.00th=[21365], 80.00th=[26870], 90.00th=[34341], 95.00th=[38011], 00:16:04.474 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:16:04.474 | 99.99th=[44303] 00:16:04.474 bw ( KiB/s): min=15368, max=16368, per=23.13%, avg=15868.00, stdev=707.11, samples=2 00:16:04.474 iops : min= 3842, max= 4092, avg=3967.00, stdev=176.78, samples=2 00:16:04.474 lat (msec) : 4=0.68%, 10=18.64%, 20=57.87%, 50=22.82% 00:16:04.474 cpu : usr=3.87%, sys=3.17%, ctx=517, majf=0, minf=1 00:16:04.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:04.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.474 issued rwts: total=3584,4094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.474 job3: (groupid=0, jobs=1): err= 0: pid=3903442: Mon Jul 15 18:30:49 2024 00:16:04.474 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1021msec) 00:16:04.474 slat (nsec): min=1323, max=8594.8k, avg=116699.27, stdev=722352.12 00:16:04.474 clat (usec): min=4076, max=47488, avg=11829.01, stdev=6095.34 00:16:04.474 lat (usec): min=4086, max=47499, avg=11945.71, stdev=6202.69 00:16:04.474 clat percentiles (usec): 00:16:04.474 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9241], 00:16:04.474 | 30.00th=[ 9503], 40.00th=[ 
9896], 50.00th=[10159], 60.00th=[10552], 00:16:04.474 | 70.00th=[10945], 80.00th=[11600], 90.00th=[14877], 95.00th=[22676], 00:16:04.474 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:16:04.474 | 99.99th=[47449] 00:16:04.474 write: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(15.0MiB/1021msec); 0 zone resets 00:16:04.474 slat (usec): min=2, max=50543, avg=148.08, stdev=1397.31 00:16:04.474 clat (msec): min=2, max=132, avg=18.68, stdev=17.41 00:16:04.474 lat (msec): min=2, max=132, avg=18.83, stdev=17.60 00:16:04.474 clat percentiles (msec): 00:16:04.474 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:16:04.474 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 16], 00:16:04.474 | 70.00th=[ 17], 80.00th=[ 23], 90.00th=[ 44], 95.00th=[ 66], 00:16:04.474 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 95], 99.95th=[ 133], 00:16:04.474 | 99.99th=[ 133] 00:16:04.474 bw ( KiB/s): min=13232, max=16384, per=21.58%, avg=14808.00, stdev=2228.80, samples=2 00:16:04.474 iops : min= 3308, max= 4096, avg=3702.00, stdev=557.20, samples=2 00:16:04.474 lat (msec) : 4=0.85%, 10=41.89%, 20=43.53%, 50=10.10%, 100=3.59% 00:16:04.474 lat (msec) : 250=0.04% 00:16:04.474 cpu : usr=3.04%, sys=3.92%, ctx=378, majf=0, minf=1 00:16:04.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:04.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.474 issued rwts: total=3584,3830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.474 00:16:04.474 Run status group 0 (all jobs): 00:16:04.474 READ: bw=61.0MiB/s (63.9MB/s), 13.7MiB/s-18.1MiB/s (14.4MB/s-19.0MB/s), io=62.3MiB (65.3MB), run=1009-1021msec 00:16:04.474 WRITE: bw=67.0MiB/s (70.3MB/s), 14.7MiB/s-19.8MiB/s (15.4MB/s-20.8MB/s), io=68.4MiB (71.7MB), run=1009-1021msec 00:16:04.474 00:16:04.474 Disk stats (read/write): 00:16:04.474 nvme0n1: ios=3619/3775, merge=0/0, ticks=32543/58018, in_queue=90561, util=98.20% 00:16:04.474 nvme0n2: ios=4146/4231, merge=0/0, ticks=37001/41203, in_queue=78204, util=98.68% 00:16:04.474 nvme0n3: ios=3099/3319, merge=0/0, ticks=39088/63977, in_queue=103065, util=98.34% 00:16:04.474 nvme0n4: ios=3099/3135, merge=0/0, ticks=34765/54386, in_queue=89151, util=98.96% 00:16:04.474 18:30:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:04.474 [global] 00:16:04.474 thread=1 00:16:04.474 invalidate=1 00:16:04.474 rw=randwrite 00:16:04.474 time_based=1 00:16:04.474 runtime=1 00:16:04.474 ioengine=libaio 00:16:04.474 direct=1 00:16:04.474 bs=4096 00:16:04.474 iodepth=128 00:16:04.474 norandommap=0 00:16:04.474 numjobs=1 00:16:04.474 00:16:04.474 verify_dump=1 00:16:04.474 verify_backlog=512 00:16:04.474 verify_state_save=0 00:16:04.474 do_verify=1 00:16:04.474 verify=crc32c-intel 00:16:04.474 [job0] 00:16:04.474 filename=/dev/nvme0n1 00:16:04.474 [job1] 00:16:04.474 filename=/dev/nvme0n2 00:16:04.474 [job2] 00:16:04.474 filename=/dev/nvme0n3 00:16:04.474 [job3] 00:16:04.474 filename=/dev/nvme0n4 00:16:04.474 Could not set queue depth (nvme0n1) 00:16:04.474 Could not set queue depth (nvme0n2) 00:16:04.474 Could not set queue depth (nvme0n3) 00:16:04.474 Could not set queue depth (nvme0n4) 00:16:04.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:16:04.739 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.739 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.739 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.739 fio-3.35 00:16:04.739 Starting 4 threads 00:16:06.116 00:16:06.116 job0: (groupid=0, jobs=1): err= 0: pid=3903815: Mon Jul 15 18:30:51 2024 00:16:06.116 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:16:06.116 slat (nsec): min=1248, max=13081k, avg=90019.35, stdev=631870.53 00:16:06.116 clat (usec): min=3070, max=48720, avg=11027.68, stdev=5189.84 00:16:06.116 lat (usec): min=3075, max=48729, avg=11117.70, stdev=5239.82 00:16:06.116 clat percentiles (usec): 00:16:06.116 | 1.00th=[ 4228], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 8979], 00:16:06.116 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:16:06.116 | 70.00th=[10814], 80.00th=[11731], 90.00th=[14091], 95.00th=[16057], 00:16:06.116 | 99.00th=[42206], 99.50th=[44303], 99.90th=[47973], 99.95th=[48497], 00:16:06.116 | 99.99th=[48497] 00:16:06.116 write: IOPS=6269, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1004msec); 0 zone resets 00:16:06.116 slat (nsec): min=1949, max=10200k, avg=63893.36, stdev=379756.97 00:16:06.116 clat (usec): min=1545, max=48688, avg=9476.75, stdev=3937.69 00:16:06.116 lat (usec): min=1558, max=48691, avg=9540.64, stdev=3958.75 00:16:06.116 clat percentiles (usec): 00:16:06.116 | 1.00th=[ 2900], 5.00th=[ 4621], 10.00th=[ 6390], 20.00th=[ 7701], 00:16:06.116 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:16:06.116 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10945], 95.00th=[12518], 00:16:06.116 | 99.00th=[32113], 99.50th=[33817], 99.90th=[40633], 99.95th=[44827], 00:16:06.116 | 99.99th=[48497] 00:16:06.116 bw ( KiB/s): min=24424, max=24920, per=32.81%, avg=24672.00, stdev=350.72, samples=2 00:16:06.116 iops : min= 6106, max= 6230, avg=6168.00, stdev=87.68, samples=2 00:16:06.116 lat (msec) : 2=0.06%, 4=2.11%, 10=63.81%, 20=31.57%, 50=2.45% 00:16:06.116 cpu : usr=5.48%, sys=5.68%, ctx=713, majf=0, minf=1 00:16:06.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:06.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.116 issued rwts: total=6144,6295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.116 job1: (groupid=0, jobs=1): err= 0: pid=3903817: Mon Jul 15 18:30:51 2024 00:16:06.116 read: IOPS=5917, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1005msec) 00:16:06.116 slat (nsec): min=1062, max=11785k, avg=81647.69, stdev=634049.82 00:16:06.116 clat (usec): min=1998, max=44191, avg=12156.59, stdev=5574.25 00:16:06.116 lat (usec): min=3371, max=44197, avg=12238.24, stdev=5616.14 00:16:06.116 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 4293], 5.00th=[ 6259], 10.00th=[ 7504], 20.00th=[ 8848], 00:16:06.117 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11076], 00:16:06.117 | 70.00th=[12649], 80.00th=[15139], 90.00th=[20579], 95.00th=[24511], 00:16:06.117 | 99.00th=[30278], 99.50th=[33162], 99.90th=[44303], 99.95th=[44303], 00:16:06.117 | 99.99th=[44303] 00:16:06.117 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone 
resets 00:16:06.117 slat (nsec): min=1981, max=13241k, avg=62737.54, stdev=533369.20 00:16:06.117 clat (usec): min=253, max=26646, avg=8958.59, stdev=3413.15 00:16:06.117 lat (usec): min=282, max=26654, avg=9021.32, stdev=3449.76 00:16:06.117 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 1631], 5.00th=[ 2868], 10.00th=[ 4883], 20.00th=[ 7242], 00:16:06.117 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:16:06.117 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[12518], 95.00th=[15795], 00:16:06.117 | 99.00th=[22414], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:16:06.117 | 99.99th=[26608] 00:16:06.117 bw ( KiB/s): min=22568, max=26584, per=32.68%, avg=24576.00, stdev=2839.74, samples=2 00:16:06.117 iops : min= 5642, max= 6646, avg=6144.00, stdev=709.94, samples=2 00:16:06.117 lat (usec) : 500=0.02%, 750=0.14%, 1000=0.07% 00:16:06.117 lat (msec) : 2=0.76%, 4=3.55%, 10=60.12%, 20=29.32%, 50=6.02% 00:16:06.117 cpu : usr=3.78%, sys=7.07%, ctx=386, majf=0, minf=1 00:16:06.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:06.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.117 issued rwts: total=5947,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.117 job2: (groupid=0, jobs=1): err= 0: pid=3903818: Mon Jul 15 18:30:51 2024 00:16:06.117 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1002msec) 00:16:06.117 slat (nsec): min=1281, max=13657k, avg=129633.54, stdev=842962.74 00:16:06.117 clat (usec): min=852, max=43382, avg=17821.65, stdev=7414.06 00:16:06.117 lat (usec): min=3448, max=43387, avg=17951.28, stdev=7456.09 00:16:06.117 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[10945], 20.00th=[11994], 00:16:06.117 | 30.00th=[12911], 40.00th=[15270], 50.00th=[16581], 60.00th=[17171], 00:16:06.117 | 70.00th=[19792], 80.00th=[22414], 90.00th=[28967], 95.00th=[33817], 00:16:06.117 | 99.00th=[40633], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:16:06.117 | 99.99th=[43254] 00:16:06.117 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:16:06.117 slat (nsec): min=1990, max=15877k, avg=160184.74, stdev=954903.54 00:16:06.117 clat (usec): min=5752, max=78069, avg=19758.90, stdev=11570.17 00:16:06.117 lat (usec): min=5761, max=78079, avg=19919.08, stdev=11623.25 00:16:06.117 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 5866], 5.00th=[10028], 10.00th=[10421], 20.00th=[11600], 00:16:06.117 | 30.00th=[14222], 40.00th=[15139], 50.00th=[16450], 60.00th=[18482], 00:16:06.117 | 70.00th=[21627], 80.00th=[24773], 90.00th=[31589], 95.00th=[39584], 00:16:06.117 | 99.00th=[76022], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:16:06.117 | 99.99th=[78119] 00:16:06.117 bw ( KiB/s): min=12288, max=15688, per=18.60%, avg=13988.00, stdev=2404.16, samples=2 00:16:06.117 iops : min= 3072, max= 3922, avg=3497.00, stdev=601.04, samples=2 00:16:06.117 lat (usec) : 1000=0.01% 00:16:06.117 lat (msec) : 4=0.12%, 10=5.54%, 20=61.78%, 50=31.00%, 100=1.54% 00:16:06.117 cpu : usr=2.70%, sys=3.30%, ctx=286, majf=0, minf=1 00:16:06.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:06.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.117 issued rwts: 
total=3112,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.117 job3: (groupid=0, jobs=1): err= 0: pid=3903819: Mon Jul 15 18:30:51 2024 00:16:06.117 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.5MiB/1043msec) 00:16:06.117 slat (nsec): min=1244, max=26020k, avg=147519.35, stdev=1050300.13 00:16:06.117 clat (usec): min=4336, max=83794, avg=20128.76, stdev=12069.12 00:16:06.117 lat (usec): min=4342, max=83798, avg=20276.28, stdev=12144.39 00:16:06.117 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 5735], 5.00th=[10290], 10.00th=[11207], 20.00th=[12256], 00:16:06.117 | 30.00th=[13698], 40.00th=[15139], 50.00th=[16450], 60.00th=[18744], 00:16:06.117 | 70.00th=[20841], 80.00th=[25560], 90.00th=[31327], 95.00th=[44827], 00:16:06.117 | 99.00th=[67634], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:16:06.117 | 99.99th=[83362] 00:16:06.117 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:16:06.117 slat (nsec): min=1966, max=22270k, avg=139367.48, stdev=778522.52 00:16:06.117 clat (usec): min=937, max=46573, avg=18792.56, stdev=10141.62 00:16:06.117 lat (usec): min=945, max=46581, avg=18931.93, stdev=10212.69 00:16:06.117 clat percentiles (usec): 00:16:06.117 | 1.00th=[ 6128], 5.00th=[ 7504], 10.00th=[10421], 20.00th=[10945], 00:16:06.117 | 30.00th=[11338], 40.00th=[11863], 50.00th=[15401], 60.00th=[17695], 00:16:06.117 | 70.00th=[21103], 80.00th=[30016], 90.00th=[35390], 95.00th=[38536], 00:16:06.117 | 99.00th=[42206], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:16:06.117 | 99.99th=[46400] 00:16:06.117 bw ( KiB/s): min=10320, max=18352, per=19.07%, avg=14336.00, stdev=5679.48, samples=2 00:16:06.117 iops : min= 2580, max= 4588, avg=3584.00, stdev=1419.87, samples=2 00:16:06.117 lat (usec) : 1000=0.15% 00:16:06.117 lat (msec) : 2=0.01%, 4=0.12%, 10=6.03%, 20=61.35%, 50=30.69% 00:16:06.117 lat (msec) : 100=1.65% 00:16:06.117 cpu : usr=2.98%, sys=2.78%, ctx=343, majf=0, minf=1 00:16:06.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:06.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.117 issued rwts: total=3200,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.117 00:16:06.117 Run status group 0 (all jobs): 00:16:06.117 READ: bw=68.9MiB/s (72.3MB/s), 12.0MiB/s-23.9MiB/s (12.6MB/s-25.1MB/s), io=71.9MiB (75.4MB), run=1002-1043msec 00:16:06.117 WRITE: bw=73.4MiB/s (77.0MB/s), 13.4MiB/s-24.5MiB/s (14.1MB/s-25.7MB/s), io=76.6MiB (80.3MB), run=1002-1043msec 00:16:06.117 00:16:06.117 Disk stats (read/write): 00:16:06.117 nvme0n1: ios=4657/5120, merge=0/0, ticks=50132/47744, in_queue=97876, util=81.96% 00:16:06.117 nvme0n2: ios=4525/4608, merge=0/0, ticks=39732/32051, in_queue=71783, util=97.12% 00:16:06.117 nvme0n3: ios=2584/2677, merge=0/0, ticks=20554/20180, in_queue=40734, util=97.51% 00:16:06.117 nvme0n4: ios=2913/3072, merge=0/0, ticks=27859/27296, in_queue=55155, util=97.79% 00:16:06.117 18:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:06.117 18:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3904048 00:16:06.117 18:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:06.117 18:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 1 -t read -r 10 00:16:06.117 [global] 00:16:06.117 thread=1 00:16:06.117 invalidate=1 00:16:06.117 rw=read 00:16:06.117 time_based=1 00:16:06.117 runtime=10 00:16:06.117 ioengine=libaio 00:16:06.117 direct=1 00:16:06.117 bs=4096 00:16:06.117 iodepth=1 00:16:06.117 norandommap=1 00:16:06.117 numjobs=1 00:16:06.117 00:16:06.117 [job0] 00:16:06.117 filename=/dev/nvme0n1 00:16:06.117 [job1] 00:16:06.117 filename=/dev/nvme0n2 00:16:06.117 [job2] 00:16:06.117 filename=/dev/nvme0n3 00:16:06.117 [job3] 00:16:06.117 filename=/dev/nvme0n4 00:16:06.117 Could not set queue depth (nvme0n1) 00:16:06.117 Could not set queue depth (nvme0n2) 00:16:06.117 Could not set queue depth (nvme0n3) 00:16:06.117 Could not set queue depth (nvme0n4) 00:16:06.377 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.377 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.377 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.377 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.377 fio-3.35 00:16:06.377 Starting 4 threads 00:16:09.666 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:09.666 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1597440, buflen=4096 00:16:09.666 fio: pid=3904196, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.666 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:09.666 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.666 18:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:09.666 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=39972864, buflen=4096 00:16:09.666 fio: pid=3904195, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.666 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=33632256, buflen=4096 00:16:09.666 fio: pid=3904193, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.666 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.666 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:09.926 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.926 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:09.926 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=50221056, buflen=4096 00:16:09.926 fio: pid=3904194, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.926 00:16:09.926 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3904193: Mon Jul 15 18:30:55 2024 00:16:09.926 read: IOPS=2638, BW=10.3MiB/s (10.8MB/s)(32.1MiB/3112msec) 
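This last run is the hotplug test: fio reads for 10 seconds (-t read -r 10) while the script deletes the bdevs backing the namespaces out from under it, so every job eventually fails with Remote I/O error — the outcome the test wants (see "fio failed as expected" further down). The shape of the step, condensed from the trace above with paths shortened to rpc.py:

    # condensed from the xtrace above
    fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &        # long-running reads in background
    fio_pid=$!
    sleep 3
    rpc.py bdev_raid_delete concat0                         # composite bdevs go first
    rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        rpc.py bdev_malloc_delete $malloc_bdev              # Malloc0 .. Malloc6
    done
    wait $fio_pid                                           # expected to exit non-zero

Each deletion removes the corresponding namespace from cnode1, and the host sees the matching /dev/nvme0nN return Remote I/O error on its next read, which is why the per-job runtimes above fall short of the requested 10 seconds.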
00:16:09.926 slat (usec): min=5, max=25007, avg=14.90, stdev=380.67 00:16:09.926 clat (usec): min=168, max=41323, avg=360.37, stdev=2295.95 00:16:09.926 lat (usec): min=174, max=41331, avg=375.26, stdev=2327.95 00:16:09.926 clat percentiles (usec): 00:16:09.926 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:16:09.926 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:16:09.926 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 269], 00:16:09.926 | 99.00th=[ 424], 99.50th=[ 478], 99.90th=[41157], 99.95th=[41157], 00:16:09.926 | 99.99th=[41157] 00:16:09.926 bw ( KiB/s): min= 336, max=17768, per=28.48%, avg=10447.83, stdev=7924.78, samples=6 00:16:09.926 iops : min= 84, max= 4442, avg=2611.83, stdev=1981.12, samples=6 00:16:09.926 lat (usec) : 250=83.83%, 500=15.75%, 750=0.02%, 1000=0.01% 00:16:09.926 lat (msec) : 4=0.01%, 20=0.02%, 50=0.34% 00:16:09.926 cpu : usr=0.55%, sys=2.48%, ctx=8217, majf=0, minf=1 00:16:09.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 issued rwts: total=8212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.926 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3904194: Mon Jul 15 18:30:55 2024 00:16:09.926 read: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(47.9MiB/3339msec) 00:16:09.926 slat (usec): min=5, max=10846, avg= 8.88, stdev=125.79 00:16:09.926 clat (usec): min=170, max=41922, avg=260.44, stdev=1304.86 00:16:09.926 lat (usec): min=177, max=49867, avg=269.31, stdev=1333.35 00:16:09.926 clat percentiles (usec): 00:16:09.926 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:16:09.926 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:16:09.926 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 262], 00:16:09.926 | 99.00th=[ 310], 99.50th=[ 375], 99.90th=[25035], 99.95th=[41157], 00:16:09.926 | 99.99th=[41681] 00:16:09.926 bw ( KiB/s): min= 9668, max=19136, per=43.56%, avg=15979.33, stdev=3544.30, samples=6 00:16:09.926 iops : min= 2417, max= 4784, avg=3994.83, stdev=886.07, samples=6 00:16:09.926 lat (usec) : 250=88.49%, 500=11.25%, 750=0.12% 00:16:09.926 lat (msec) : 2=0.01%, 10=0.01%, 50=0.11% 00:16:09.926 cpu : usr=0.81%, sys=3.48%, ctx=12268, majf=0, minf=1 00:16:09.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 issued rwts: total=12262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.926 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3904195: Mon Jul 15 18:30:55 2024 00:16:09.926 read: IOPS=3355, BW=13.1MiB/s (13.7MB/s)(38.1MiB/2909msec) 00:16:09.926 slat (nsec): min=4894, max=78280, avg=7048.83, stdev=1604.13 00:16:09.926 clat (usec): min=166, max=42354, avg=287.72, stdev=1464.61 00:16:09.926 lat (usec): min=172, max=42362, avg=294.76, stdev=1465.15 00:16:09.926 clat percentiles (usec): 00:16:09.926 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:16:09.926 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 
225], 60.00th=[ 241], 00:16:09.926 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 306], 00:16:09.926 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[40633], 99.95th=[41681], 00:16:09.926 | 99.99th=[42206] 00:16:09.926 bw ( KiB/s): min= 6192, max=18736, per=39.87%, avg=14627.20, stdev=5124.15, samples=5 00:16:09.926 iops : min= 1548, max= 4684, avg=3656.80, stdev=1281.04, samples=5 00:16:09.926 lat (usec) : 250=72.20%, 500=27.52%, 750=0.13% 00:16:09.926 lat (msec) : 50=0.13% 00:16:09.926 cpu : usr=0.76%, sys=3.09%, ctx=9762, majf=0, minf=1 00:16:09.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 issued rwts: total=9760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.926 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3904196: Mon Jul 15 18:30:55 2024 00:16:09.926 read: IOPS=144, BW=578KiB/s (592kB/s)(1560KiB/2697msec) 00:16:09.926 slat (nsec): min=6082, max=29783, avg=8645.34, stdev=3586.44 00:16:09.926 clat (usec): min=215, max=41847, avg=6880.63, stdev=14989.48 00:16:09.926 lat (usec): min=222, max=41876, avg=6889.28, stdev=14992.07 00:16:09.926 clat percentiles (usec): 00:16:09.926 | 1.00th=[ 225], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 262], 00:16:09.926 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 318], 00:16:09.926 | 70.00th=[ 379], 80.00th=[ 486], 90.00th=[41157], 95.00th=[41157], 00:16:09.926 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:09.926 | 99.99th=[41681] 00:16:09.926 bw ( KiB/s): min= 96, max= 208, per=0.32%, avg=118.40, stdev=50.09, samples=5 00:16:09.926 iops : min= 24, max= 52, avg=29.60, stdev=12.52, samples=5 00:16:09.926 lat (usec) : 250=9.97%, 500=71.36%, 750=2.30% 00:16:09.926 lat (msec) : 50=16.11% 00:16:09.926 cpu : usr=0.04%, sys=0.19%, ctx=394, majf=0, minf=2 00:16:09.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.926 issued rwts: total=391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.926 00:16:09.926 Run status group 0 (all jobs): 00:16:09.926 READ: bw=35.8MiB/s (37.6MB/s), 578KiB/s-14.3MiB/s (592kB/s-15.0MB/s), io=120MiB (125MB), run=2697-3339msec 00:16:09.926 00:16:09.926 Disk stats (read/write): 00:16:09.926 nvme0n1: ios=8212/0, merge=0/0, ticks=2923/0, in_queue=2923, util=93.65% 00:16:09.926 nvme0n2: ios=12264/0, merge=0/0, ticks=3107/0, in_queue=3107, util=99.07% 00:16:09.926 nvme0n3: ios=9758/0, merge=0/0, ticks=2716/0, in_queue=2716, util=96.52% 00:16:09.926 nvme0n4: ios=191/0, merge=0/0, ticks=2757/0, in_queue=2757, util=98.89% 00:16:09.926 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.926 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:10.185 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.185 18:30:55 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:10.443 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.443 18:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3904048 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:10.702 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:10.961 nvmf hotplug test: fio failed as expected 00:16:10.961 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.221 rmmod nvme_tcp 00:16:11.221 rmmod nvme_fabrics 00:16:11.221 rmmod nvme_keyring 00:16:11.221 
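Teardown then runs in the usual order for these targets: disconnect the host side, confirm the serial is gone, delete the subsystem, clean up fio's verify-state files, and unload the kernel modules. Condensed from the trace above:

    # condensed from the teardown xtrace above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # host drops the controller
    # waitforserial_disconnect: lsblk | grep -q -w SPDKISFASTANDAWESOME until absent
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job*-verify.state                         # fio verify-state leftovers
    modprobe -v -r nvme-tcp                                 # pulls nvme_fabrics/nvme_keyring with it
    modprobe -v -r nvme-fabrics

Because fio_status came back 4 rather than 0, the script prints "nvmf hotplug test: fio failed as expected" instead of treating the run as a failure, and then kills the nvmf target process before the next test starts.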
18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3901298 ']' 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3901298 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3901298 ']' 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3901298 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3901298 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3901298' 00:16:11.221 killing process with pid 3901298 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3901298 00:16:11.221 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3901298 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.480 18:30:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.388 18:30:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.388 00:16:13.388 real 0m26.698s 00:16:13.388 user 1m47.077s 00:16:13.388 sys 0m8.392s 00:16:13.388 18:30:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.388 18:30:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.388 ************************************ 00:16:13.388 END TEST nvmf_fio_target 00:16:13.388 ************************************ 00:16:13.647 18:30:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:13.647 18:30:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:13.647 18:30:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.647 18:30:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.647 18:30:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.647 ************************************ 00:16:13.647 START TEST nvmf_bdevio 00:16:13.647 ************************************ 00:16:13.647 18:30:58 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:13.647 * Looking for test storage... 00:16:13.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.647 18:30:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.648 18:30:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:20.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:20.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:20.290 Found net devices under 0000:86:00.0: cvl_0_0 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:20.290 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:16:20.290 00:16:20.290 --- 10.0.0.2 ping statistics --- 00:16:20.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.290 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:16:20.290 00:16:20.290 --- 10.0.0.1 ping statistics --- 00:16:20.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.290 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3908444 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3908444 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3908444 ']' 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.290 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.291 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.291 18:31:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 [2024-07-15 18:31:04.894843] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:20.291 [2024-07-15 18:31:04.894890] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.291 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.291 [2024-07-15 18:31:04.965072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.291 [2024-07-15 18:31:05.045000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.291 [2024-07-15 18:31:05.045032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
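The namespace plumbing that nvmf_tcp_init performed in the trace above boils down to the following sketch (interface names and addresses are the ones printed in the log):

    # The first E810 port becomes the target interface inside a private network
    # namespace; the second port stays in the root namespace for the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # One ping in each direction validates the link before the target starts.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1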
00:16:20.291 [2024-07-15 18:31:05.045039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.291 [2024-07-15 18:31:05.045045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.291 [2024-07-15 18:31:05.045050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.291 [2024-07-15 18:31:05.045106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:20.291 [2024-07-15 18:31:05.045215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:20.291 [2024-07-15 18:31:05.045322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.291 [2024-07-15 18:31:05.045324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 [2024-07-15 18:31:05.745232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 Malloc0 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
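The rpc_cmd calls traced above are the whole device-under-test setup for bdevio; spelled out against scripts/rpc.py they amount to the sketch below (all arguments copied from the trace; the listener notice that follows confirms the final step took effect):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects to that listener as an initiator, using the bdev_nvme_attach_controller JSON that gen_nvmf_target_json prints further down.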
00:16:20.291 [2024-07-15 18:31:05.788178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.291 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:20.579 { 00:16:20.579 "params": { 00:16:20.579 "name": "Nvme$subsystem", 00:16:20.579 "trtype": "$TEST_TRANSPORT", 00:16:20.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:20.579 "adrfam": "ipv4", 00:16:20.579 "trsvcid": "$NVMF_PORT", 00:16:20.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:20.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:20.579 "hdgst": ${hdgst:-false}, 00:16:20.579 "ddgst": ${ddgst:-false} 00:16:20.579 }, 00:16:20.579 "method": "bdev_nvme_attach_controller" 00:16:20.579 } 00:16:20.579 EOF 00:16:20.579 )") 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:20.579 18:31:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:20.579 "params": { 00:16:20.579 "name": "Nvme1", 00:16:20.579 "trtype": "tcp", 00:16:20.579 "traddr": "10.0.0.2", 00:16:20.579 "adrfam": "ipv4", 00:16:20.579 "trsvcid": "4420", 00:16:20.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.579 "hdgst": false, 00:16:20.579 "ddgst": false 00:16:20.579 }, 00:16:20.579 "method": "bdev_nvme_attach_controller" 00:16:20.579 }' 00:16:20.579 [2024-07-15 18:31:05.834098] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:16:20.579 [2024-07-15 18:31:05.834143] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908676 ] 00:16:20.579 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.579 [2024-07-15 18:31:05.900532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.579 [2024-07-15 18:31:05.975102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.579 [2024-07-15 18:31:05.975206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.580 [2024-07-15 18:31:05.975207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.580 I/O targets: 00:16:20.580 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:20.580 00:16:20.580 00:16:20.580 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.580 http://cunit.sourceforge.net/ 00:16:20.580 00:16:20.580 00:16:20.580 Suite: bdevio tests on: Nvme1n1 00:16:20.838 Test: blockdev write read block ...passed 00:16:20.838 Test: blockdev write zeroes read block ...passed 00:16:20.838 Test: blockdev write zeroes read no split ...passed 00:16:20.838 Test: blockdev write zeroes read split ...passed 00:16:20.838 Test: blockdev write zeroes read split partial ...passed 00:16:20.838 Test: blockdev reset ...[2024-07-15 18:31:06.285401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:20.838 [2024-07-15 18:31:06.285460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ff6d0 (9): Bad file descriptor 00:16:21.097 [2024-07-15 18:31:06.420147] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.097 passed 00:16:21.097 Test: blockdev write read 8 blocks ...passed 00:16:21.097 Test: blockdev write read size > 128k ...passed 00:16:21.097 Test: blockdev write read invalid size ...passed 00:16:21.097 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.097 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.097 Test: blockdev write read max offset ...passed 00:16:21.097 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.097 Test: blockdev writev readv 8 blocks ...passed 00:16:21.097 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.097 Test: blockdev writev readv block ...passed 00:16:21.097 Test: blockdev writev readv size > 128k ...passed 00:16:21.097 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.097 Test: blockdev comparev and writev ...[2024-07-15 18:31:06.630235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.630276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.630515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.630537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.630778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.630798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.630805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.631045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.631054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.097 [2024-07-15 18:31:06.631065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:21.097 [2024-07-15 18:31:06.631072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.355 passed 00:16:21.355 Test: blockdev nvme passthru rw ...passed 00:16:21.355 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:31:06.713693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.355 [2024-07-15 18:31:06.713708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.355 [2024-07-15 18:31:06.713810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.355 [2024-07-15 18:31:06.713819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.355 [2024-07-15 18:31:06.713923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.355 [2024-07-15 18:31:06.713932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.355 [2024-07-15 18:31:06.714034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.355 [2024-07-15 18:31:06.714043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.355 passed 00:16:21.355 Test: blockdev nvme admin passthru ...passed 00:16:21.355 Test: blockdev copy ...passed 00:16:21.355 00:16:21.355 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.355 suites 1 1 n/a 0 0 00:16:21.355 tests 23 23 23 0 0 00:16:21.355 asserts 152 152 152 0 n/a 00:16:21.355 00:16:21.355 Elapsed time = 1.281 seconds 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.613 18:31:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.613 rmmod nvme_tcp 00:16:21.613 rmmod nvme_fabrics 00:16:21.613 rmmod nvme_keyring 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3908444 ']' 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3908444 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3908444 ']' 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3908444 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.613 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3908444 00:16:21.614 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:21.614 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:21.614 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3908444' 00:16:21.614 killing process with pid 3908444 00:16:21.614 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3908444 00:16:21.614 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3908444 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.872 18:31:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.841 18:31:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.841 00:16:23.841 real 0m10.345s 00:16:23.841 user 0m12.581s 00:16:23.841 sys 0m4.841s 00:16:23.841 18:31:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.841 18:31:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.841 ************************************ 00:16:23.841 END TEST nvmf_bdevio 00:16:23.841 ************************************ 00:16:23.841 18:31:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.841 18:31:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.841 18:31:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.841 18:31:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.841 18:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.098 ************************************ 00:16:24.098 START TEST nvmf_auth_target 00:16:24.098 ************************************ 00:16:24.098 18:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:24.098 * Looking for test storage... 
00:16:24.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.098 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.098 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.099 18:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.663 18:31:14 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:30.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:30.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:30.663 Found net devices under 0000:86:00.0: cvl_0_0 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:30.663 Found net devices under 0000:86:00.1: cvl_0_1 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.663 18:31:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:16:30.663 00:16:30.663 --- 10.0.0.2 ping statistics --- 00:16:30.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.663 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:30.663 00:16:30.663 --- 10.0.0.1 ping statistics --- 00:16:30.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.663 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3912424 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3912424 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3912424 ']' 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
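Expanded from the trace above, nvmfappstart runs the target inside the namespace and blocks until its RPC socket answers (pid, paths, and arguments as logged):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!                   # 3912424 in this run
    # waitforlisten polls until the app listens on /var/tmp/spdk.sock.
    waitforlisten "$nvmfpid"
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT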
00:16:30.663 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.664 18:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3912452 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2685bb4a911b7a8a0b48bf371b1467f11c19bc81fd5f780b 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YXt 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2685bb4a911b7a8a0b48bf371b1467f11c19bc81fd5f780b 0 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2685bb4a911b7a8a0b48bf371b1467f11c19bc81fd5f780b 0 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2685bb4a911b7a8a0b48bf371b1467f11c19bc81fd5f780b 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YXt 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YXt 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.YXt 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=99253b272bf7cc31a00a550330a86e43192008d8c5a3f41ac5af70c5b74a13dd 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5Ru 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 99253b272bf7cc31a00a550330a86e43192008d8c5a3f41ac5af70c5b74a13dd 3 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 99253b272bf7cc31a00a550330a86e43192008d8c5a3f41ac5af70c5b74a13dd 3 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=99253b272bf7cc31a00a550330a86e43192008d8c5a3f41ac5af70c5b74a13dd 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:30.664 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5Ru 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5Ru 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.5Ru 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c348d62f683baa18c13a22ab1f6c315b 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dCQ 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c348d62f683baa18c13a22ab1f6c315b 1 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c348d62f683baa18c13a22ab1f6c315b 1 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=c348d62f683baa18c13a22ab1f6c315b 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dCQ 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dCQ 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.dCQ 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:30.923 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d2f4590c9d35cffbb6dff3ff9cadea18807bf785a5228977 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hh1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d2f4590c9d35cffbb6dff3ff9cadea18807bf785a5228977 2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d2f4590c9d35cffbb6dff3ff9cadea18807bf785a5228977 2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d2f4590c9d35cffbb6dff3ff9cadea18807bf785a5228977 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hh1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hh1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Hh1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a63fd37ac90785c7f33e738f0077202e4ee8db7c85722a73 00:16:30.924 
18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3WT 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a63fd37ac90785c7f33e738f0077202e4ee8db7c85722a73 2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a63fd37ac90785c7f33e738f0077202e4ee8db7c85722a73 2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a63fd37ac90785c7f33e738f0077202e4ee8db7c85722a73 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3WT 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3WT 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.3WT 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=125254652e939daaaf8fd8afae3bf0d6 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wCB 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 125254652e939daaaf8fd8afae3bf0d6 1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 125254652e939daaaf8fd8afae3bf0d6 1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=125254652e939daaaf8fd8afae3bf0d6 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:30.924 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wCB 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wCB 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.wCB 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19da6f485e962b06f8ddd0dab89b71f069c5faf4a4729f472ae82ef2949bfa72 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6tg 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19da6f485e962b06f8ddd0dab89b71f069c5faf4a4729f472ae82ef2949bfa72 3 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19da6f485e962b06f8ddd0dab89b71f069c5faf4a4729f472ae82ef2949bfa72 3 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19da6f485e962b06f8ddd0dab89b71f069c5faf4a4729f472ae82ef2949bfa72 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6tg 00:16:31.182 18:31:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6tg 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.6tg 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3912424 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3912424 ']' 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
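At this point target/auth.sh has generated four subsystem keys (keys[0..3]) and three controller keys (ckeys[0..2]); ckeys[3] is intentionally left empty so that key3 exercises unidirectional authentication. The DHHC-1 strings that appear later in the nvme connect lines are these same hex strings wrapped by format_dhchap_key. A minimal standalone sketch of that wrapping, assuming the CRC32 suffix is appended little-endian as in the nvme-cli secret format (the python body is not echoed in this log, so the exact internals are inferred, not quoted):

# key material: 48 hex chars, as for "gen_dhchap_key null 48"
key=$(xxd -p -c0 -l 24 /dev/urandom)
digest=0                                  # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                      # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")     # assumed CRC placement
# DHHC-1:<2-digit digest code>:<base64(secret || crc32)>:
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
PY

Everything that follows is the same round repeated per (digest, dhgroup, keyid): register the key files with both RPC servers, pin the host stack to one digest/dhgroup, allow the host NQN on the subsystem with that key pair, attach/verify/detach through the SPDK host stack, then repeat the connect through the kernel initiator. Condensed, with the rpc.py paths shortened and <uuid> standing for 00ad29c2-ccbd-e911-906e-0017a4403562:

rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.YXt                 # target side
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YXt
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null                  # pin negotiation
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:<uuid> --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:<uuid> \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expect auth.state "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:<uuid> --hostid <uuid> \
        --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:<uuid>

The outer loops then advance the keyid (key3 runs with --dhchap-key only, since ckeys[3] is empty) and the dhgroup (null, then ffdhe2048, and so on), re-running the identical round each time; the qpair JSON dumps below are the per-round verification that digest, dhgroup, and auth state all match what was configured.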
00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.183 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3912452 /var/tmp/host.sock 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3912452 ']' 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YXt 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YXt 00:16:31.442 18:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YXt 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.5Ru ]] 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Ru 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Ru 00:16:31.701 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Ru 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dCQ 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dCQ 00:16:31.960 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dCQ 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Hh1 ]] 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hh1 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hh1 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hh1 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3WT 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3WT 00:16:32.219 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3WT 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.wCB ]] 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wCB 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wCB 00:16:32.477 18:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.wCB 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6tg 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6tg 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6tg 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.736 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.994 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.995 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.253 00:16:33.253 18:31:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.253 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.253 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.511 { 00:16:33.511 "cntlid": 1, 00:16:33.511 "qid": 0, 00:16:33.511 "state": "enabled", 00:16:33.511 "thread": "nvmf_tgt_poll_group_000", 00:16:33.511 "listen_address": { 00:16:33.511 "trtype": "TCP", 00:16:33.511 "adrfam": "IPv4", 00:16:33.511 "traddr": "10.0.0.2", 00:16:33.511 "trsvcid": "4420" 00:16:33.511 }, 00:16:33.511 "peer_address": { 00:16:33.511 "trtype": "TCP", 00:16:33.511 "adrfam": "IPv4", 00:16:33.511 "traddr": "10.0.0.1", 00:16:33.511 "trsvcid": "44728" 00:16:33.511 }, 00:16:33.511 "auth": { 00:16:33.511 "state": "completed", 00:16:33.511 "digest": "sha256", 00:16:33.511 "dhgroup": "null" 00:16:33.511 } 00:16:33.511 } 00:16:33.511 ]' 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.511 18:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.770 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.337 18:31:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.337 18:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.596 00:16:34.596 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.596 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.596 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.855 { 00:16:34.855 "cntlid": 3, 00:16:34.855 "qid": 0, 00:16:34.855 
"state": "enabled", 00:16:34.855 "thread": "nvmf_tgt_poll_group_000", 00:16:34.855 "listen_address": { 00:16:34.855 "trtype": "TCP", 00:16:34.855 "adrfam": "IPv4", 00:16:34.855 "traddr": "10.0.0.2", 00:16:34.855 "trsvcid": "4420" 00:16:34.855 }, 00:16:34.855 "peer_address": { 00:16:34.855 "trtype": "TCP", 00:16:34.855 "adrfam": "IPv4", 00:16:34.855 "traddr": "10.0.0.1", 00:16:34.855 "trsvcid": "44746" 00:16:34.855 }, 00:16:34.855 "auth": { 00:16:34.855 "state": "completed", 00:16:34.855 "digest": "sha256", 00:16:34.855 "dhgroup": "null" 00:16:34.855 } 00:16:34.855 } 00:16:34.855 ]' 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.855 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.113 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.113 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.113 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.113 18:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.680 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:35.939 18:31:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.939 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.198 00:16:36.198 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.198 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.198 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.456 { 00:16:36.456 "cntlid": 5, 00:16:36.456 "qid": 0, 00:16:36.456 "state": "enabled", 00:16:36.456 "thread": "nvmf_tgt_poll_group_000", 00:16:36.456 "listen_address": { 00:16:36.456 "trtype": "TCP", 00:16:36.456 "adrfam": "IPv4", 00:16:36.456 "traddr": "10.0.0.2", 00:16:36.456 "trsvcid": "4420" 00:16:36.456 }, 00:16:36.456 "peer_address": { 00:16:36.456 "trtype": "TCP", 00:16:36.456 "adrfam": "IPv4", 00:16:36.456 "traddr": "10.0.0.1", 00:16:36.456 "trsvcid": "44772" 00:16:36.456 }, 00:16:36.456 "auth": { 00:16:36.456 "state": "completed", 00:16:36.456 "digest": "sha256", 00:16:36.456 "dhgroup": "null" 00:16:36.456 } 00:16:36.456 } 00:16:36.456 ]' 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.456 18:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.714 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.277 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.534 18:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.535 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.535 18:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.535 00:16:37.535 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.535 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.535 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.792 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.792 { 00:16:37.793 "cntlid": 7, 00:16:37.793 "qid": 0, 00:16:37.793 "state": "enabled", 00:16:37.793 "thread": "nvmf_tgt_poll_group_000", 00:16:37.793 "listen_address": { 00:16:37.793 "trtype": "TCP", 00:16:37.793 "adrfam": "IPv4", 00:16:37.793 "traddr": "10.0.0.2", 00:16:37.793 "trsvcid": "4420" 00:16:37.793 }, 00:16:37.793 "peer_address": { 00:16:37.793 "trtype": "TCP", 00:16:37.793 "adrfam": "IPv4", 00:16:37.793 "traddr": "10.0.0.1", 00:16:37.793 "trsvcid": "44812" 00:16:37.793 }, 00:16:37.793 "auth": { 00:16:37.793 "state": "completed", 00:16:37.793 "digest": "sha256", 00:16:37.793 "dhgroup": "null" 00:16:37.793 } 00:16:37.793 } 00:16:37.793 ]' 00:16:37.793 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.793 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.793 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.050 18:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.616 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.875 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.133 00:16:39.133 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.133 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.133 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.391 { 00:16:39.391 "cntlid": 9, 00:16:39.391 "qid": 0, 00:16:39.391 "state": "enabled", 00:16:39.391 "thread": "nvmf_tgt_poll_group_000", 00:16:39.391 "listen_address": { 00:16:39.391 "trtype": "TCP", 00:16:39.391 "adrfam": "IPv4", 00:16:39.391 "traddr": "10.0.0.2", 00:16:39.391 "trsvcid": "4420" 00:16:39.391 }, 00:16:39.391 "peer_address": { 00:16:39.391 "trtype": "TCP", 00:16:39.391 "adrfam": "IPv4", 00:16:39.391 "traddr": "10.0.0.1", 00:16:39.391 "trsvcid": "33526" 00:16:39.391 }, 00:16:39.391 "auth": { 00:16:39.391 "state": "completed", 00:16:39.391 "digest": "sha256", 00:16:39.391 "dhgroup": "ffdhe2048" 00:16:39.391 } 00:16:39.391 } 00:16:39.391 ]' 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.391 18:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.650 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:16:40.215 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.215 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.215 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.215 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.216 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.216 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.216 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.216 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.474 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.733 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.733 { 00:16:40.733 "cntlid": 11, 00:16:40.733 "qid": 0, 00:16:40.733 "state": "enabled", 00:16:40.733 "thread": "nvmf_tgt_poll_group_000", 00:16:40.733 "listen_address": { 00:16:40.733 "trtype": "TCP", 00:16:40.733 "adrfam": "IPv4", 00:16:40.733 "traddr": "10.0.0.2", 00:16:40.733 "trsvcid": "4420" 00:16:40.733 }, 00:16:40.733 "peer_address": { 00:16:40.733 "trtype": "TCP", 00:16:40.733 "adrfam": "IPv4", 00:16:40.733 "traddr": "10.0.0.1", 00:16:40.733 "trsvcid": "33558" 00:16:40.733 }, 00:16:40.733 "auth": { 00:16:40.733 "state": "completed", 00:16:40.733 "digest": "sha256", 00:16:40.733 "dhgroup": "ffdhe2048" 00:16:40.733 } 00:16:40.733 } 00:16:40.733 ]' 00:16:40.733 
18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.733 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.990 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.249 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.815 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.073 00:16:42.073 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.073 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.073 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.331 { 00:16:42.331 "cntlid": 13, 00:16:42.331 "qid": 0, 00:16:42.331 "state": "enabled", 00:16:42.331 "thread": "nvmf_tgt_poll_group_000", 00:16:42.331 "listen_address": { 00:16:42.331 "trtype": "TCP", 00:16:42.331 "adrfam": "IPv4", 00:16:42.331 "traddr": "10.0.0.2", 00:16:42.331 "trsvcid": "4420" 00:16:42.331 }, 00:16:42.331 "peer_address": { 00:16:42.331 "trtype": "TCP", 00:16:42.331 "adrfam": "IPv4", 00:16:42.331 "traddr": "10.0.0.1", 00:16:42.331 "trsvcid": "33582" 00:16:42.331 }, 00:16:42.331 "auth": { 00:16:42.331 "state": "completed", 00:16:42.331 "digest": "sha256", 00:16:42.331 "dhgroup": "ffdhe2048" 00:16:42.331 } 00:16:42.331 } 00:16:42.331 ]' 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.331 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.589 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.589 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.589 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.590 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.156 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.414 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.673 00:16:43.673 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.673 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:43.673 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.933 { 00:16:43.933 "cntlid": 15, 00:16:43.933 "qid": 0, 00:16:43.933 "state": "enabled", 00:16:43.933 "thread": "nvmf_tgt_poll_group_000", 00:16:43.933 "listen_address": { 00:16:43.933 "trtype": "TCP", 00:16:43.933 "adrfam": "IPv4", 00:16:43.933 "traddr": "10.0.0.2", 00:16:43.933 "trsvcid": "4420" 00:16:43.933 }, 00:16:43.933 "peer_address": { 00:16:43.933 "trtype": "TCP", 00:16:43.933 "adrfam": "IPv4", 00:16:43.933 "traddr": "10.0.0.1", 00:16:43.933 "trsvcid": "33602" 00:16:43.933 }, 00:16:43.933 "auth": { 00:16:43.933 "state": "completed", 00:16:43.933 "digest": "sha256", 00:16:43.933 "dhgroup": "ffdhe2048" 00:16:43.933 } 00:16:43.933 } 00:16:43.933 ]' 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.933 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.192 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:16:44.759 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.759 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.759 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.760 18:31:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.760 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.018 00:16:45.018 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.018 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.018 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.277 { 00:16:45.277 "cntlid": 17, 00:16:45.277 "qid": 0, 00:16:45.277 "state": "enabled", 00:16:45.277 "thread": "nvmf_tgt_poll_group_000", 00:16:45.277 "listen_address": { 00:16:45.277 "trtype": "TCP", 00:16:45.277 "adrfam": "IPv4", 
00:16:45.277 "traddr": "10.0.0.2", 00:16:45.277 "trsvcid": "4420" 00:16:45.277 }, 00:16:45.277 "peer_address": { 00:16:45.277 "trtype": "TCP", 00:16:45.277 "adrfam": "IPv4", 00:16:45.277 "traddr": "10.0.0.1", 00:16:45.277 "trsvcid": "33616" 00:16:45.277 }, 00:16:45.277 "auth": { 00:16:45.277 "state": "completed", 00:16:45.277 "digest": "sha256", 00:16:45.277 "dhgroup": "ffdhe3072" 00:16:45.277 } 00:16:45.277 } 00:16:45.277 ]' 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.277 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.537 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.537 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.537 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.537 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.104 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.363 18:31:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.363 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.622 00:16:46.622 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.622 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.622 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.881 { 00:16:46.881 "cntlid": 19, 00:16:46.881 "qid": 0, 00:16:46.881 "state": "enabled", 00:16:46.881 "thread": "nvmf_tgt_poll_group_000", 00:16:46.881 "listen_address": { 00:16:46.881 "trtype": "TCP", 00:16:46.881 "adrfam": "IPv4", 00:16:46.881 "traddr": "10.0.0.2", 00:16:46.881 "trsvcid": "4420" 00:16:46.881 }, 00:16:46.881 "peer_address": { 00:16:46.881 "trtype": "TCP", 00:16:46.881 "adrfam": "IPv4", 00:16:46.881 "traddr": "10.0.0.1", 00:16:46.881 "trsvcid": "33634" 00:16:46.881 }, 00:16:46.881 "auth": { 00:16:46.881 "state": "completed", 00:16:46.881 "digest": "sha256", 00:16:46.881 "dhgroup": "ffdhe3072" 00:16:46.881 } 00:16:46.881 } 00:16:46.881 ]' 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.881 18:31:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.881 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.140 18:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.709 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.709 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.967 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.225 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.225 { 00:16:48.225 "cntlid": 21, 00:16:48.225 "qid": 0, 00:16:48.225 "state": "enabled", 00:16:48.225 "thread": "nvmf_tgt_poll_group_000", 00:16:48.226 "listen_address": { 00:16:48.226 "trtype": "TCP", 00:16:48.226 "adrfam": "IPv4", 00:16:48.226 "traddr": "10.0.0.2", 00:16:48.226 "trsvcid": "4420" 00:16:48.226 }, 00:16:48.226 "peer_address": { 00:16:48.226 "trtype": "TCP", 00:16:48.226 "adrfam": "IPv4", 00:16:48.226 "traddr": "10.0.0.1", 00:16:48.226 "trsvcid": "33668" 00:16:48.226 }, 00:16:48.226 "auth": { 00:16:48.226 "state": "completed", 00:16:48.226 "digest": "sha256", 00:16:48.226 "dhgroup": "ffdhe3072" 00:16:48.226 } 00:16:48.226 } 00:16:48.226 ]' 00:16:48.226 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.226 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.226 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.484 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.484 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.484 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.484 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.484 18:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.484 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
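(For reference: the per-iteration pattern this trace keeps repeating — restrict the allowed DH-HMAC-CHAP parameters, register the host key on the target, authenticate through both the SPDK host path and the kernel initiator, then tear down — condenses to the sketch below. This is a minimal reconstruction from the commands visible in this run, assuming the same NQNs, addresses, and host RPC socket; $KEY/$CKEY are placeholders for the DHHC-1 secrets, not values from the log. Target-side calls appear in the trace as rpc_cmd against the default SPDK socket, while host-side ones pass -s /var/tmp/host.sock.)

  # Host side: limit which digests/dhgroups the initiator may negotiate.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: allow this host NQN with a bidirectional key pair.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach through the SPDK host path, then verify what was negotiated.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'   # expect "completed"; .digest/.dhgroup checked likewise
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Same handshake via the kernel initiator, passing the secrets directly.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562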
00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.050 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.309 18:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.568 00:16:49.568 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.568 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.568 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.827 { 00:16:49.827 "cntlid": 23, 00:16:49.827 "qid": 0, 00:16:49.827 "state": "enabled", 00:16:49.827 "thread": "nvmf_tgt_poll_group_000", 00:16:49.827 "listen_address": { 00:16:49.827 "trtype": "TCP", 00:16:49.827 "adrfam": "IPv4", 00:16:49.827 "traddr": "10.0.0.2", 00:16:49.827 "trsvcid": "4420" 00:16:49.827 }, 00:16:49.827 "peer_address": { 00:16:49.827 "trtype": "TCP", 00:16:49.827 "adrfam": "IPv4", 00:16:49.827 "traddr": "10.0.0.1", 00:16:49.827 "trsvcid": "43278" 00:16:49.827 }, 00:16:49.827 "auth": { 00:16:49.827 "state": "completed", 00:16:49.827 "digest": "sha256", 00:16:49.827 "dhgroup": "ffdhe3072" 00:16:49.827 } 00:16:49.827 } 00:16:49.827 ]' 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.827 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.085 18:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.653 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.912 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.171 00:16:51.171 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.171 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.171 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.171 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.430 { 00:16:51.430 "cntlid": 25, 00:16:51.430 "qid": 0, 00:16:51.430 "state": "enabled", 00:16:51.430 "thread": "nvmf_tgt_poll_group_000", 00:16:51.430 "listen_address": { 00:16:51.430 "trtype": "TCP", 00:16:51.430 "adrfam": "IPv4", 00:16:51.430 "traddr": "10.0.0.2", 00:16:51.430 "trsvcid": "4420" 00:16:51.430 }, 00:16:51.430 "peer_address": { 00:16:51.430 "trtype": "TCP", 00:16:51.430 "adrfam": "IPv4", 00:16:51.430 "traddr": "10.0.0.1", 00:16:51.430 "trsvcid": "43302" 00:16:51.430 }, 00:16:51.430 "auth": { 00:16:51.430 "state": "completed", 00:16:51.430 "digest": "sha256", 00:16:51.430 "dhgroup": "ffdhe4096" 00:16:51.430 } 00:16:51.430 } 00:16:51.430 ]' 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.430 18:31:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.430 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.688 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.256 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.515 18:31:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.515 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.515 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.773 { 00:16:52.773 "cntlid": 27, 00:16:52.773 "qid": 0, 00:16:52.773 "state": "enabled", 00:16:52.773 "thread": "nvmf_tgt_poll_group_000", 00:16:52.773 "listen_address": { 00:16:52.773 "trtype": "TCP", 00:16:52.773 "adrfam": "IPv4", 00:16:52.773 "traddr": "10.0.0.2", 00:16:52.773 "trsvcid": "4420" 00:16:52.773 }, 00:16:52.773 "peer_address": { 00:16:52.773 "trtype": "TCP", 00:16:52.773 "adrfam": "IPv4", 00:16:52.773 "traddr": "10.0.0.1", 00:16:52.773 "trsvcid": "43332" 00:16:52.773 }, 00:16:52.773 "auth": { 00:16:52.773 "state": "completed", 00:16:52.773 "digest": "sha256", 00:16:52.773 "dhgroup": "ffdhe4096" 00:16:52.773 } 00:16:52.773 } 00:16:52.773 ]' 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.773 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.032 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.032 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.032 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.032 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.032 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.290 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.856 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.114 00:16:54.114 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.114 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.114 18:31:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.371 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.371 { 00:16:54.371 "cntlid": 29, 00:16:54.371 "qid": 0, 00:16:54.372 "state": "enabled", 00:16:54.372 "thread": "nvmf_tgt_poll_group_000", 00:16:54.372 "listen_address": { 00:16:54.372 "trtype": "TCP", 00:16:54.372 "adrfam": "IPv4", 00:16:54.372 "traddr": "10.0.0.2", 00:16:54.372 "trsvcid": "4420" 00:16:54.372 }, 00:16:54.372 "peer_address": { 00:16:54.372 "trtype": "TCP", 00:16:54.372 "adrfam": "IPv4", 00:16:54.372 "traddr": "10.0.0.1", 00:16:54.372 "trsvcid": "43364" 00:16:54.372 }, 00:16:54.372 "auth": { 00:16:54.372 "state": "completed", 00:16:54.372 "digest": "sha256", 00:16:54.372 "dhgroup": "ffdhe4096" 00:16:54.372 } 00:16:54.372 } 00:16:54.372 ]' 00:16:54.372 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.372 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.372 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.372 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.372 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.630 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.630 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.630 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.630 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.197 18:31:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.197 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.455 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.712 00:16:55.712 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.712 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.712 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.970 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.970 { 00:16:55.970 "cntlid": 31, 00:16:55.970 "qid": 0, 00:16:55.970 "state": "enabled", 00:16:55.971 "thread": "nvmf_tgt_poll_group_000", 00:16:55.971 "listen_address": { 00:16:55.971 "trtype": "TCP", 00:16:55.971 "adrfam": "IPv4", 00:16:55.971 "traddr": "10.0.0.2", 00:16:55.971 "trsvcid": "4420" 00:16:55.971 }, 
00:16:55.971 "peer_address": { 00:16:55.971 "trtype": "TCP", 00:16:55.971 "adrfam": "IPv4", 00:16:55.971 "traddr": "10.0.0.1", 00:16:55.971 "trsvcid": "43384" 00:16:55.971 }, 00:16:55.971 "auth": { 00:16:55.971 "state": "completed", 00:16:55.971 "digest": "sha256", 00:16:55.971 "dhgroup": "ffdhe4096" 00:16:55.971 } 00:16:55.971 } 00:16:55.971 ]' 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.971 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.229 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.795 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.054 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.312 00:16:57.312 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.312 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.312 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.571 { 00:16:57.571 "cntlid": 33, 00:16:57.571 "qid": 0, 00:16:57.571 "state": "enabled", 00:16:57.571 "thread": "nvmf_tgt_poll_group_000", 00:16:57.571 "listen_address": { 00:16:57.571 "trtype": "TCP", 00:16:57.571 "adrfam": "IPv4", 00:16:57.571 "traddr": "10.0.0.2", 00:16:57.571 "trsvcid": "4420" 00:16:57.571 }, 00:16:57.571 "peer_address": { 00:16:57.571 "trtype": "TCP", 00:16:57.571 "adrfam": "IPv4", 00:16:57.571 "traddr": "10.0.0.1", 00:16:57.571 "trsvcid": "43428" 00:16:57.571 }, 00:16:57.571 "auth": { 00:16:57.571 "state": "completed", 00:16:57.571 "digest": "sha256", 00:16:57.571 "dhgroup": "ffdhe6144" 00:16:57.571 } 00:16:57.571 } 00:16:57.571 ]' 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.571 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.571 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.571 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.571 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.571 18:31:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.571 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.829 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.396 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.655 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.914 00:16:58.914 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.914 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.914 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.172 { 00:16:59.172 "cntlid": 35, 00:16:59.172 "qid": 0, 00:16:59.172 "state": "enabled", 00:16:59.172 "thread": "nvmf_tgt_poll_group_000", 00:16:59.172 "listen_address": { 00:16:59.172 "trtype": "TCP", 00:16:59.172 "adrfam": "IPv4", 00:16:59.172 "traddr": "10.0.0.2", 00:16:59.172 "trsvcid": "4420" 00:16:59.172 }, 00:16:59.172 "peer_address": { 00:16:59.172 "trtype": "TCP", 00:16:59.172 "adrfam": "IPv4", 00:16:59.172 "traddr": "10.0.0.1", 00:16:59.172 "trsvcid": "59032" 00:16:59.172 }, 00:16:59.172 "auth": { 00:16:59.172 "state": "completed", 00:16:59.172 "digest": "sha256", 00:16:59.172 "dhgroup": "ffdhe6144" 00:16:59.172 } 00:16:59.172 } 00:16:59.172 ]' 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.172 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.431 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.999 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.565 00:17:00.565 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.565 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.565 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.565 { 00:17:00.565 "cntlid": 37, 00:17:00.565 "qid": 0, 00:17:00.565 "state": "enabled", 00:17:00.565 "thread": "nvmf_tgt_poll_group_000", 00:17:00.565 "listen_address": { 00:17:00.565 "trtype": "TCP", 00:17:00.565 "adrfam": "IPv4", 00:17:00.565 "traddr": "10.0.0.2", 00:17:00.565 "trsvcid": "4420" 00:17:00.565 }, 00:17:00.565 "peer_address": { 00:17:00.565 "trtype": "TCP", 00:17:00.565 "adrfam": "IPv4", 00:17:00.565 "traddr": "10.0.0.1", 00:17:00.565 "trsvcid": "59058" 00:17:00.565 }, 00:17:00.565 "auth": { 00:17:00.565 "state": "completed", 00:17:00.565 "digest": "sha256", 00:17:00.565 "dhgroup": "ffdhe6144" 00:17:00.565 } 00:17:00.565 } 00:17:00.565 ]' 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.565 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.822 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.387 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.686 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.944 00:17:01.944 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.944 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.944 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.201 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.201 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.202 { 00:17:02.202 "cntlid": 39, 00:17:02.202 "qid": 0, 00:17:02.202 "state": "enabled", 00:17:02.202 "thread": "nvmf_tgt_poll_group_000", 00:17:02.202 "listen_address": { 00:17:02.202 "trtype": "TCP", 00:17:02.202 "adrfam": "IPv4", 00:17:02.202 "traddr": "10.0.0.2", 00:17:02.202 "trsvcid": "4420" 00:17:02.202 }, 00:17:02.202 "peer_address": { 00:17:02.202 "trtype": "TCP", 00:17:02.202 "adrfam": "IPv4", 00:17:02.202 "traddr": "10.0.0.1", 00:17:02.202 "trsvcid": "59086" 00:17:02.202 }, 00:17:02.202 "auth": { 00:17:02.202 "state": "completed", 00:17:02.202 "digest": "sha256", 00:17:02.202 "dhgroup": "ffdhe6144" 00:17:02.202 } 00:17:02.202 } 00:17:02.202 ]' 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.202 18:31:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.202 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.459 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.032 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.033 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.308 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:03.308 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.308 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.309 18:31:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.309 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.583 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.849 { 00:17:03.849 "cntlid": 41, 00:17:03.849 "qid": 0, 00:17:03.849 "state": "enabled", 00:17:03.849 "thread": "nvmf_tgt_poll_group_000", 00:17:03.849 "listen_address": { 00:17:03.849 "trtype": "TCP", 00:17:03.849 "adrfam": "IPv4", 00:17:03.849 "traddr": "10.0.0.2", 00:17:03.849 "trsvcid": "4420" 00:17:03.849 }, 00:17:03.849 "peer_address": { 00:17:03.849 "trtype": "TCP", 00:17:03.849 "adrfam": "IPv4", 00:17:03.849 "traddr": "10.0.0.1", 00:17:03.849 "trsvcid": "59118" 00:17:03.849 }, 00:17:03.849 "auth": { 00:17:03.849 "state": "completed", 00:17:03.849 "digest": "sha256", 00:17:03.849 "dhgroup": "ffdhe8192" 00:17:03.849 } 00:17:03.849 } 00:17:03.849 ]' 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.849 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.107 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:04.675 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.675 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.675 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.675 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.933 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.501 00:17:05.501 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.501 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.501 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.786 { 00:17:05.786 "cntlid": 43, 00:17:05.786 "qid": 0, 00:17:05.786 "state": "enabled", 00:17:05.786 "thread": "nvmf_tgt_poll_group_000", 00:17:05.786 "listen_address": { 00:17:05.786 "trtype": "TCP", 00:17:05.786 "adrfam": "IPv4", 00:17:05.786 "traddr": "10.0.0.2", 00:17:05.786 "trsvcid": "4420" 00:17:05.786 }, 00:17:05.786 "peer_address": { 00:17:05.786 "trtype": "TCP", 00:17:05.786 "adrfam": "IPv4", 00:17:05.786 "traddr": "10.0.0.1", 00:17:05.786 "trsvcid": "59148" 00:17:05.786 }, 00:17:05.786 "auth": { 00:17:05.786 "state": "completed", 00:17:05.786 "digest": "sha256", 00:17:05.786 "dhgroup": "ffdhe8192" 00:17:05.786 } 00:17:05.786 } 00:17:05.786 ]' 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.786 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.045 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:06.612 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.612 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.870 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.436 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.436 { 00:17:07.436 "cntlid": 45, 00:17:07.436 "qid": 0, 00:17:07.436 "state": "enabled", 00:17:07.436 "thread": "nvmf_tgt_poll_group_000", 00:17:07.436 "listen_address": { 00:17:07.436 "trtype": "TCP", 00:17:07.436 "adrfam": "IPv4", 00:17:07.436 "traddr": "10.0.0.2", 00:17:07.436 "trsvcid": "4420" 
00:17:07.436 }, 00:17:07.436 "peer_address": { 00:17:07.436 "trtype": "TCP", 00:17:07.436 "adrfam": "IPv4", 00:17:07.436 "traddr": "10.0.0.1", 00:17:07.436 "trsvcid": "59174" 00:17:07.436 }, 00:17:07.436 "auth": { 00:17:07.436 "state": "completed", 00:17:07.436 "digest": "sha256", 00:17:07.436 "dhgroup": "ffdhe8192" 00:17:07.436 } 00:17:07.436 } 00:17:07.436 ]' 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.436 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.694 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.694 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.694 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.694 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.694 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.694 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.265 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.523 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.524 18:31:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.524 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.090 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.090 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.349 { 00:17:09.349 "cntlid": 47, 00:17:09.349 "qid": 0, 00:17:09.349 "state": "enabled", 00:17:09.349 "thread": "nvmf_tgt_poll_group_000", 00:17:09.349 "listen_address": { 00:17:09.349 "trtype": "TCP", 00:17:09.349 "adrfam": "IPv4", 00:17:09.349 "traddr": "10.0.0.2", 00:17:09.349 "trsvcid": "4420" 00:17:09.349 }, 00:17:09.349 "peer_address": { 00:17:09.349 "trtype": "TCP", 00:17:09.349 "adrfam": "IPv4", 00:17:09.349 "traddr": "10.0.0.1", 00:17:09.349 "trsvcid": "56940" 00:17:09.349 }, 00:17:09.349 "auth": { 00:17:09.349 "state": "completed", 00:17:09.349 "digest": "sha256", 00:17:09.349 "dhgroup": "ffdhe8192" 00:17:09.349 } 00:17:09.349 } 00:17:09.349 ]' 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.349 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.349 
18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.608 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.173 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.432 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.432 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.690 { 00:17:10.690 "cntlid": 49, 00:17:10.690 "qid": 0, 00:17:10.690 "state": "enabled", 00:17:10.690 "thread": "nvmf_tgt_poll_group_000", 00:17:10.690 "listen_address": { 00:17:10.690 "trtype": "TCP", 00:17:10.690 "adrfam": "IPv4", 00:17:10.690 "traddr": "10.0.0.2", 00:17:10.690 "trsvcid": "4420" 00:17:10.690 }, 00:17:10.690 "peer_address": { 00:17:10.690 "trtype": "TCP", 00:17:10.690 "adrfam": "IPv4", 00:17:10.690 "traddr": "10.0.0.1", 00:17:10.690 "trsvcid": "56968" 00:17:10.690 }, 00:17:10.690 "auth": { 00:17:10.690 "state": "completed", 00:17:10.690 "digest": "sha384", 00:17:10.690 "dhgroup": "null" 00:17:10.690 } 00:17:10.690 } 00:17:10.690 ]' 00:17:10.690 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.691 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.691 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.949 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:11.515 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.515 18:31:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.515 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.515 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.774 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.032 00:17:12.032 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.032 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.032 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.290 { 00:17:12.290 "cntlid": 51, 00:17:12.290 "qid": 0, 00:17:12.290 "state": "enabled", 00:17:12.290 "thread": "nvmf_tgt_poll_group_000", 00:17:12.290 "listen_address": { 00:17:12.290 "trtype": "TCP", 00:17:12.290 "adrfam": "IPv4", 00:17:12.290 "traddr": "10.0.0.2", 00:17:12.290 "trsvcid": "4420" 00:17:12.290 }, 00:17:12.290 "peer_address": { 00:17:12.290 "trtype": "TCP", 00:17:12.290 "adrfam": "IPv4", 00:17:12.290 "traddr": "10.0.0.1", 00:17:12.290 "trsvcid": "57004" 00:17:12.290 }, 00:17:12.290 "auth": { 00:17:12.290 "state": "completed", 00:17:12.290 "digest": "sha384", 00:17:12.290 "dhgroup": "null" 00:17:12.290 } 00:17:12.290 } 00:17:12.290 ]' 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.290 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.548 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.115 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:13.374 18:31:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.374 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.632 00:17:13.632 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.632 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.632 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.632 { 00:17:13.632 "cntlid": 53, 00:17:13.632 "qid": 0, 00:17:13.632 "state": "enabled", 00:17:13.632 "thread": "nvmf_tgt_poll_group_000", 00:17:13.632 "listen_address": { 00:17:13.632 "trtype": "TCP", 00:17:13.632 "adrfam": "IPv4", 00:17:13.632 "traddr": "10.0.0.2", 00:17:13.632 "trsvcid": "4420" 00:17:13.632 }, 00:17:13.632 "peer_address": { 00:17:13.632 "trtype": "TCP", 00:17:13.632 "adrfam": "IPv4", 00:17:13.632 "traddr": "10.0.0.1", 00:17:13.632 "trsvcid": "57030" 00:17:13.632 }, 00:17:13.632 "auth": { 00:17:13.632 "state": "completed", 00:17:13.632 "digest": "sha384", 00:17:13.632 "dhgroup": "null" 00:17:13.632 } 00:17:13.632 } 00:17:13.632 ]' 00:17:13.632 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.890 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.150 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.715 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.715 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.972 00:17:14.972 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.972 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.972 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.247 { 00:17:15.247 "cntlid": 55, 00:17:15.247 "qid": 0, 00:17:15.247 "state": "enabled", 00:17:15.247 "thread": "nvmf_tgt_poll_group_000", 00:17:15.247 "listen_address": { 00:17:15.247 "trtype": "TCP", 00:17:15.247 "adrfam": "IPv4", 00:17:15.247 "traddr": "10.0.0.2", 00:17:15.247 "trsvcid": "4420" 00:17:15.247 }, 00:17:15.247 "peer_address": { 00:17:15.247 "trtype": "TCP", 00:17:15.247 "adrfam": "IPv4", 00:17:15.247 "traddr": "10.0.0.1", 00:17:15.247 "trsvcid": "57062" 00:17:15.247 }, 00:17:15.247 "auth": { 00:17:15.247 "state": "completed", 00:17:15.247 "digest": "sha384", 00:17:15.247 "dhgroup": "null" 00:17:15.247 } 00:17:15.247 } 00:17:15.247 ]' 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.247 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.504 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:16.070 18:32:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.070 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.328 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.329 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.329 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.587 { 00:17:16.587 "cntlid": 57, 00:17:16.587 "qid": 0, 00:17:16.587 "state": "enabled", 00:17:16.587 "thread": "nvmf_tgt_poll_group_000", 00:17:16.587 "listen_address": { 00:17:16.587 "trtype": "TCP", 00:17:16.587 "adrfam": "IPv4", 00:17:16.587 "traddr": "10.0.0.2", 00:17:16.587 "trsvcid": "4420" 00:17:16.587 }, 00:17:16.587 "peer_address": { 00:17:16.587 "trtype": "TCP", 00:17:16.587 "adrfam": "IPv4", 00:17:16.587 "traddr": "10.0.0.1", 00:17:16.587 "trsvcid": "57092" 00:17:16.587 }, 00:17:16.587 "auth": { 00:17:16.587 "state": "completed", 00:17:16.587 "digest": "sha384", 00:17:16.587 "dhgroup": "ffdhe2048" 00:17:16.587 } 00:17:16.587 } 00:17:16.587 ]' 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.587 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.588 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.846 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.846 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.846 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.846 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.413 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.671 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.928 00:17:17.928 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.928 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.928 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.185 { 00:17:18.185 "cntlid": 59, 00:17:18.185 "qid": 0, 00:17:18.185 "state": "enabled", 00:17:18.185 "thread": "nvmf_tgt_poll_group_000", 00:17:18.185 "listen_address": { 00:17:18.185 "trtype": "TCP", 00:17:18.185 "adrfam": "IPv4", 00:17:18.185 "traddr": "10.0.0.2", 00:17:18.185 "trsvcid": "4420" 00:17:18.185 }, 00:17:18.185 "peer_address": { 00:17:18.185 "trtype": "TCP", 00:17:18.185 "adrfam": "IPv4", 00:17:18.185 
"traddr": "10.0.0.1", 00:17:18.185 "trsvcid": "57134" 00:17:18.185 }, 00:17:18.185 "auth": { 00:17:18.185 "state": "completed", 00:17:18.185 "digest": "sha384", 00:17:18.185 "dhgroup": "ffdhe2048" 00:17:18.185 } 00:17:18.185 } 00:17:18.185 ]' 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.185 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.186 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.186 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.186 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.186 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.186 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.444 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.010 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.268 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.268 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.527 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.527 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.527 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.527 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.527 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.527 { 00:17:19.527 "cntlid": 61, 00:17:19.527 "qid": 0, 00:17:19.527 "state": "enabled", 00:17:19.527 "thread": "nvmf_tgt_poll_group_000", 00:17:19.527 "listen_address": { 00:17:19.527 "trtype": "TCP", 00:17:19.527 "adrfam": "IPv4", 00:17:19.527 "traddr": "10.0.0.2", 00:17:19.527 "trsvcid": "4420" 00:17:19.527 }, 00:17:19.527 "peer_address": { 00:17:19.527 "trtype": "TCP", 00:17:19.527 "adrfam": "IPv4", 00:17:19.527 "traddr": "10.0.0.1", 00:17:19.527 "trsvcid": "34242" 00:17:19.527 }, 00:17:19.527 "auth": { 00:17:19.527 "state": "completed", 00:17:19.527 "digest": "sha384", 00:17:19.527 "dhgroup": "ffdhe2048" 00:17:19.527 } 00:17:19.527 } 00:17:19.527 ]' 00:17:19.527 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.527 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.527 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.785 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.351 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.608 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.865 00:17:20.865 18:32:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.865 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.865 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.123 { 00:17:21.123 "cntlid": 63, 00:17:21.123 "qid": 0, 00:17:21.123 "state": "enabled", 00:17:21.123 "thread": "nvmf_tgt_poll_group_000", 00:17:21.123 "listen_address": { 00:17:21.123 "trtype": "TCP", 00:17:21.123 "adrfam": "IPv4", 00:17:21.123 "traddr": "10.0.0.2", 00:17:21.123 "trsvcid": "4420" 00:17:21.123 }, 00:17:21.123 "peer_address": { 00:17:21.123 "trtype": "TCP", 00:17:21.123 "adrfam": "IPv4", 00:17:21.123 "traddr": "10.0.0.1", 00:17:21.123 "trsvcid": "34270" 00:17:21.123 }, 00:17:21.123 "auth": { 00:17:21.123 "state": "completed", 00:17:21.123 "digest": "sha384", 00:17:21.123 "dhgroup": "ffdhe2048" 00:17:21.123 } 00:17:21.123 } 00:17:21.123 ]' 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.123 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.381 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
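Every pass above runs the same DH-HMAC-CHAP cycle, and the ffdhe3072 and ffdhe4096 passes below repeat it with only the DH group changed. A condensed sketch of one iteration, distilled from the commands visible in this log — not the test script itself. It assumes rpc.py with no -s flag reaches the target's default socket while -s /var/tmp/host.sock reaches the host-side bdev stack, and that the key1/ckey1 key names and the DHHC-1 secrets ($KEY/$CKEY below are hypothetical placeholders) were registered earlier in the run:

  # One connect/authenticate iteration (sketch; NQNs and flags taken from this log).
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Restrict the host-side initiator to one digest/DH-group combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Register the host on the target subsystem with its key (and optional controller key).
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach from the host side, authenticating with the same keys.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify what was negotiated on the established qpair (digest, dhgroup, auth state).
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # Tear down, then exercise the same handshake through the kernel initiator.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n "$SUBNQN"
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The log resumes below with the outer loop advancing to the ffdhe3072 group and key0.
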
00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.984 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.249 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.249 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.517 { 
00:17:22.517 "cntlid": 65, 00:17:22.517 "qid": 0, 00:17:22.517 "state": "enabled", 00:17:22.517 "thread": "nvmf_tgt_poll_group_000", 00:17:22.517 "listen_address": { 00:17:22.517 "trtype": "TCP", 00:17:22.517 "adrfam": "IPv4", 00:17:22.517 "traddr": "10.0.0.2", 00:17:22.517 "trsvcid": "4420" 00:17:22.517 }, 00:17:22.517 "peer_address": { 00:17:22.517 "trtype": "TCP", 00:17:22.517 "adrfam": "IPv4", 00:17:22.517 "traddr": "10.0.0.1", 00:17:22.517 "trsvcid": "34298" 00:17:22.517 }, 00:17:22.517 "auth": { 00:17:22.517 "state": "completed", 00:17:22.517 "digest": "sha384", 00:17:22.517 "dhgroup": "ffdhe3072" 00:17:22.517 } 00:17:22.517 } 00:17:22.517 ]' 00:17:22.517 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.517 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.517 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.517 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.517 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.780 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.780 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.780 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.780 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.346 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.604 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:23.604 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.604 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:23.604 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.604 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.604 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.605 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.863 00:17:23.863 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.863 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.863 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.122 { 00:17:24.122 "cntlid": 67, 00:17:24.122 "qid": 0, 00:17:24.122 "state": "enabled", 00:17:24.122 "thread": "nvmf_tgt_poll_group_000", 00:17:24.122 "listen_address": { 00:17:24.122 "trtype": "TCP", 00:17:24.122 "adrfam": "IPv4", 00:17:24.122 "traddr": "10.0.0.2", 00:17:24.122 "trsvcid": "4420" 00:17:24.122 }, 00:17:24.122 "peer_address": { 00:17:24.122 "trtype": "TCP", 00:17:24.122 "adrfam": "IPv4", 00:17:24.122 "traddr": "10.0.0.1", 00:17:24.122 "trsvcid": "34328" 00:17:24.122 }, 00:17:24.122 "auth": { 00:17:24.122 "state": "completed", 00:17:24.122 "digest": "sha384", 00:17:24.122 "dhgroup": "ffdhe3072" 00:17:24.122 } 00:17:24.122 } 00:17:24.122 ]' 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.122 18:32:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.122 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.380 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.947 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.206 00:17:25.206 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.206 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.206 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.466 { 00:17:25.466 "cntlid": 69, 00:17:25.466 "qid": 0, 00:17:25.466 "state": "enabled", 00:17:25.466 "thread": "nvmf_tgt_poll_group_000", 00:17:25.466 "listen_address": { 00:17:25.466 "trtype": "TCP", 00:17:25.466 "adrfam": "IPv4", 00:17:25.466 "traddr": "10.0.0.2", 00:17:25.466 "trsvcid": "4420" 00:17:25.466 }, 00:17:25.466 "peer_address": { 00:17:25.466 "trtype": "TCP", 00:17:25.466 "adrfam": "IPv4", 00:17:25.466 "traddr": "10.0.0.1", 00:17:25.466 "trsvcid": "34360" 00:17:25.466 }, 00:17:25.466 "auth": { 00:17:25.466 "state": "completed", 00:17:25.466 "digest": "sha384", 00:17:25.466 "dhgroup": "ffdhe3072" 00:17:25.466 } 00:17:25.466 } 00:17:25.466 ]' 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.466 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.466 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.466 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.724 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.724 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.724 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.724 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret 
DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:26.291 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.292 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.551 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.810 00:17:26.810 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.810 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.810 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.068 { 00:17:27.068 "cntlid": 71, 00:17:27.068 "qid": 0, 00:17:27.068 "state": "enabled", 00:17:27.068 "thread": "nvmf_tgt_poll_group_000", 00:17:27.068 "listen_address": { 00:17:27.068 "trtype": "TCP", 00:17:27.068 "adrfam": "IPv4", 00:17:27.068 "traddr": "10.0.0.2", 00:17:27.068 "trsvcid": "4420" 00:17:27.068 }, 00:17:27.068 "peer_address": { 00:17:27.068 "trtype": "TCP", 00:17:27.068 "adrfam": "IPv4", 00:17:27.068 "traddr": "10.0.0.1", 00:17:27.068 "trsvcid": "34386" 00:17:27.068 }, 00:17:27.068 "auth": { 00:17:27.068 "state": "completed", 00:17:27.068 "digest": "sha384", 00:17:27.068 "dhgroup": "ffdhe3072" 00:17:27.068 } 00:17:27.068 } 00:17:27.068 ]' 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.068 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.069 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.069 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.069 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.327 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.906 18:32:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.166 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.425 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.425 { 00:17:28.425 "cntlid": 73, 00:17:28.425 "qid": 0, 00:17:28.425 "state": "enabled", 00:17:28.425 "thread": "nvmf_tgt_poll_group_000", 00:17:28.425 "listen_address": { 00:17:28.425 "trtype": "TCP", 00:17:28.425 "adrfam": "IPv4", 00:17:28.425 "traddr": "10.0.0.2", 00:17:28.425 "trsvcid": "4420" 00:17:28.425 }, 00:17:28.425 "peer_address": { 00:17:28.425 "trtype": "TCP", 00:17:28.425 "adrfam": "IPv4", 00:17:28.425 "traddr": "10.0.0.1", 00:17:28.425 "trsvcid": "34410" 00:17:28.425 }, 00:17:28.425 "auth": { 00:17:28.425 
"state": "completed", 00:17:28.425 "digest": "sha384", 00:17:28.425 "dhgroup": "ffdhe4096" 00:17:28.425 } 00:17:28.425 } 00:17:28.425 ]' 00:17:28.425 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.684 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.684 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.684 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.684 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.684 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.684 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.684 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.943 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.512 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.770 00:17:29.770 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.770 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.770 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.029 { 00:17:30.029 "cntlid": 75, 00:17:30.029 "qid": 0, 00:17:30.029 "state": "enabled", 00:17:30.029 "thread": "nvmf_tgt_poll_group_000", 00:17:30.029 "listen_address": { 00:17:30.029 "trtype": "TCP", 00:17:30.029 "adrfam": "IPv4", 00:17:30.029 "traddr": "10.0.0.2", 00:17:30.029 "trsvcid": "4420" 00:17:30.029 }, 00:17:30.029 "peer_address": { 00:17:30.029 "trtype": "TCP", 00:17:30.029 "adrfam": "IPv4", 00:17:30.029 "traddr": "10.0.0.1", 00:17:30.029 "trsvcid": "33062" 00:17:30.029 }, 00:17:30.029 "auth": { 00:17:30.029 "state": "completed", 00:17:30.029 "digest": "sha384", 00:17:30.029 "dhgroup": "ffdhe4096" 00:17:30.029 } 00:17:30.029 } 00:17:30.029 ]' 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.029 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.288 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.855 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.113 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:31.372 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.372 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.372 { 00:17:31.372 "cntlid": 77, 00:17:31.372 "qid": 0, 00:17:31.372 "state": "enabled", 00:17:31.372 "thread": "nvmf_tgt_poll_group_000", 00:17:31.372 "listen_address": { 00:17:31.372 "trtype": "TCP", 00:17:31.372 "adrfam": "IPv4", 00:17:31.372 "traddr": "10.0.0.2", 00:17:31.372 "trsvcid": "4420" 00:17:31.372 }, 00:17:31.372 "peer_address": { 00:17:31.372 "trtype": "TCP", 00:17:31.372 "adrfam": "IPv4", 00:17:31.372 "traddr": "10.0.0.1", 00:17:31.372 "trsvcid": "33094" 00:17:31.372 }, 00:17:31.372 "auth": { 00:17:31.372 "state": "completed", 00:17:31.372 "digest": "sha384", 00:17:31.372 "dhgroup": "ffdhe4096" 00:17:31.372 } 00:17:31.372 } 00:17:31.372 ]' 00:17:31.373 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.631 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.631 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.631 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.631 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.631 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.631 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.631 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.889 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:32.456 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.456 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.456 18:32:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.456 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.456 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.457 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.716 00:17:32.716 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.716 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.716 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.974 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.974 { 00:17:32.974 "cntlid": 79, 00:17:32.974 "qid": 
0, 00:17:32.974 "state": "enabled", 00:17:32.974 "thread": "nvmf_tgt_poll_group_000", 00:17:32.974 "listen_address": { 00:17:32.974 "trtype": "TCP", 00:17:32.974 "adrfam": "IPv4", 00:17:32.974 "traddr": "10.0.0.2", 00:17:32.974 "trsvcid": "4420" 00:17:32.975 }, 00:17:32.975 "peer_address": { 00:17:32.975 "trtype": "TCP", 00:17:32.975 "adrfam": "IPv4", 00:17:32.975 "traddr": "10.0.0.1", 00:17:32.975 "trsvcid": "33104" 00:17:32.975 }, 00:17:32.975 "auth": { 00:17:32.975 "state": "completed", 00:17:32.975 "digest": "sha384", 00:17:32.975 "dhgroup": "ffdhe4096" 00:17:32.975 } 00:17:32.975 } 00:17:32.975 ]' 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.975 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.233 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:33.799 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.799 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.799 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.800 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.058 18:32:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.058 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.317 00:17:34.317 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.317 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.317 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.575 { 00:17:34.575 "cntlid": 81, 00:17:34.575 "qid": 0, 00:17:34.575 "state": "enabled", 00:17:34.575 "thread": "nvmf_tgt_poll_group_000", 00:17:34.575 "listen_address": { 00:17:34.575 "trtype": "TCP", 00:17:34.575 "adrfam": "IPv4", 00:17:34.575 "traddr": "10.0.0.2", 00:17:34.575 "trsvcid": "4420" 00:17:34.575 }, 00:17:34.575 "peer_address": { 00:17:34.575 "trtype": "TCP", 00:17:34.575 "adrfam": "IPv4", 00:17:34.575 "traddr": "10.0.0.1", 00:17:34.575 "trsvcid": "33134" 00:17:34.575 }, 00:17:34.575 "auth": { 00:17:34.575 "state": "completed", 00:17:34.575 "digest": "sha384", 00:17:34.575 "dhgroup": "ffdhe6144" 00:17:34.575 } 00:17:34.575 } 00:17:34.575 ]' 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.575 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.575 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.575 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.575 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.575 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.575 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.834 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.402 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.661 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.920 00:17:35.920 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.920 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.920 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.179 { 00:17:36.179 "cntlid": 83, 00:17:36.179 "qid": 0, 00:17:36.179 "state": "enabled", 00:17:36.179 "thread": "nvmf_tgt_poll_group_000", 00:17:36.179 "listen_address": { 00:17:36.179 "trtype": "TCP", 00:17:36.179 "adrfam": "IPv4", 00:17:36.179 "traddr": "10.0.0.2", 00:17:36.179 "trsvcid": "4420" 00:17:36.179 }, 00:17:36.179 "peer_address": { 00:17:36.179 "trtype": "TCP", 00:17:36.179 "adrfam": "IPv4", 00:17:36.179 "traddr": "10.0.0.1", 00:17:36.179 "trsvcid": "33154" 00:17:36.179 }, 00:17:36.179 "auth": { 00:17:36.179 "state": "completed", 00:17:36.179 "digest": "sha384", 00:17:36.179 "dhgroup": "ffdhe6144" 00:17:36.179 } 00:17:36.179 } 00:17:36.179 ]' 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.179 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.437 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret 
DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.004 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.263 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.263 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.263 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.522 00:17:37.522 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.522 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.522 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.781 { 00:17:37.781 "cntlid": 85, 00:17:37.781 "qid": 0, 00:17:37.781 "state": "enabled", 00:17:37.781 "thread": "nvmf_tgt_poll_group_000", 00:17:37.781 "listen_address": { 00:17:37.781 "trtype": "TCP", 00:17:37.781 "adrfam": "IPv4", 00:17:37.781 "traddr": "10.0.0.2", 00:17:37.781 "trsvcid": "4420" 00:17:37.781 }, 00:17:37.781 "peer_address": { 00:17:37.781 "trtype": "TCP", 00:17:37.781 "adrfam": "IPv4", 00:17:37.781 "traddr": "10.0.0.1", 00:17:37.781 "trsvcid": "33178" 00:17:37.781 }, 00:17:37.781 "auth": { 00:17:37.781 "state": "completed", 00:17:37.781 "digest": "sha384", 00:17:37.781 "dhgroup": "ffdhe6144" 00:17:37.781 } 00:17:37.781 } 00:17:37.781 ]' 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.781 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.039 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
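
From here the run moves to the last key (key3) of the ffdhe6144 group. Every iteration in this section has the same shape: restrict the host-side bdev_nvme module to a single digest/DH-group pair, authorize the host NQN on the target subsystem with one of the named keys registered earlier in the run (outside this excerpt), attach a controller with the matching key so that DH-HMAC-CHAP must complete for the attach to succeed, check the negotiated parameters on the resulting qpair, and tear everything down before the next combination. Condensed into plain shell, using only the RPCs visible in the log (hostrpc is the test's wrapper for scripts/rpc.py against the host app's socket at /var/tmp/host.sock, while rpc_cmd talks to the nvmf target's default socket; paths are shortened to repo-relative scripts/rpc.py):

    # Host side: accept only sha384 + ffdhe6144 during DH-HMAC-CHAP.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Target side: authorize the host NQN on the subsystem with a named key.
    # (key3 has no companion ckey3 in this run, so this iteration exercises
    # unidirectional authentication; other iterations also pass
    # --dhchap-ctrlr-key ckeyN for bidirectional authentication.)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key3

    # Host side: attach; this only succeeds if authentication completes.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

    # Teardown before the next key/dhgroup combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the auth.sh traces is what makes the controller key optional: the flag is appended only when a controller key exists for that key index.
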
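
The named keys themselves (key0..key3 and their ckeyN counterparts) are provisioned before this excerpt begins, so their registration is not shown here. A sketch of what that step typically looks like with SPDK's file-based keyring; the file paths are hypothetical and the use of the keyring_file_add_key RPC is an assumption, since the actual setup happens earlier in the log:

    # Assumed provisioning (not shown in this excerpt): store a DHHC-1 secret
    # in a file and register it under a name on both target and host sockets.
    echo -n 'DHHC-1:01:MTI1...wPpd:' > /tmp/spdk.key2   # hypothetical path, secret abbreviated
    chmod 0600 /tmp/spdk.key2
    scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key2
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key2
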
00:17:38.612 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.612 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.179 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.179 { 00:17:39.179 "cntlid": 87, 00:17:39.179 "qid": 0, 00:17:39.179 "state": "enabled", 00:17:39.179 "thread": "nvmf_tgt_poll_group_000", 00:17:39.179 "listen_address": { 00:17:39.179 "trtype": "TCP", 00:17:39.179 "adrfam": "IPv4", 00:17:39.179 "traddr": "10.0.0.2", 00:17:39.179 "trsvcid": "4420" 00:17:39.179 }, 00:17:39.179 "peer_address": { 00:17:39.179 "trtype": "TCP", 00:17:39.179 "adrfam": "IPv4", 00:17:39.179 "traddr": "10.0.0.1", 00:17:39.179 "trsvcid": "47206" 00:17:39.179 }, 00:17:39.179 "auth": { 00:17:39.179 "state": "completed", 
00:17:39.179 "digest": "sha384", 00:17:39.179 "dhgroup": "ffdhe6144" 00:17:39.179 } 00:17:39.179 } 00:17:39.179 ]' 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.179 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.437 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.694 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.261 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.826 00:17:40.826 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.826 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.826 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.083 { 00:17:41.083 "cntlid": 89, 00:17:41.083 "qid": 0, 00:17:41.083 "state": "enabled", 00:17:41.083 "thread": "nvmf_tgt_poll_group_000", 00:17:41.083 "listen_address": { 00:17:41.083 "trtype": "TCP", 00:17:41.083 "adrfam": "IPv4", 00:17:41.083 "traddr": "10.0.0.2", 00:17:41.083 "trsvcid": "4420" 00:17:41.083 }, 00:17:41.083 "peer_address": { 00:17:41.083 "trtype": "TCP", 00:17:41.083 "adrfam": "IPv4", 00:17:41.083 "traddr": "10.0.0.1", 00:17:41.083 "trsvcid": "47234" 00:17:41.083 }, 00:17:41.083 "auth": { 00:17:41.083 "state": "completed", 00:17:41.083 "digest": "sha384", 00:17:41.083 "dhgroup": "ffdhe8192" 00:17:41.083 } 00:17:41.083 } 00:17:41.083 ]' 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.083 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.341 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.907 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
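
The same cycle is also exercised through the kernel initiator, as in the nvme connect/disconnect pairs above. The secrets are handed over in the DH-HMAC-CHAP ASCII secret representation, DHHC-1:<t>:<base64 data>:, where, per that representation, the <t> field names the hash with which the configured secret is optionally transformed (00 = no transformation, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a short CRC tail. A minimal sketch of the kernel-side leg, with the secrets abbreviated from the values logged above:

    # Kernel initiator: connect with a host secret and, for bidirectional
    # authentication, a controller secret; -i 1 keeps it to one I/O queue.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:00:MjY4...VL/WcA==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:MTlk...ZJ0Cjc0=:'

    # Disconnect by subsystem NQN once the connection has been verified.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
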
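
Each attach, whether from the SPDK host stack or the kernel, is then verified against the target's view of the connection: bdev_nvme_get_controllers confirms the controller name, and nvmf_subsystem_get_qpairs returns the qpair list (as in the dump that follows), from which the test asserts the negotiated digest, DH group, and authentication state with jq. The same check as a standalone snippet, with the expected values for the current sha384/ffdhe8192 iteration:

    # Fetch the subsystem's qpairs from the target and assert that
    # DH-HMAC-CHAP completed with the expected parameters.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
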
00:17:42.474 00:17:42.474 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.474 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.474 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.733 { 00:17:42.733 "cntlid": 91, 00:17:42.733 "qid": 0, 00:17:42.733 "state": "enabled", 00:17:42.733 "thread": "nvmf_tgt_poll_group_000", 00:17:42.733 "listen_address": { 00:17:42.733 "trtype": "TCP", 00:17:42.733 "adrfam": "IPv4", 00:17:42.733 "traddr": "10.0.0.2", 00:17:42.733 "trsvcid": "4420" 00:17:42.733 }, 00:17:42.733 "peer_address": { 00:17:42.733 "trtype": "TCP", 00:17:42.733 "adrfam": "IPv4", 00:17:42.733 "traddr": "10.0.0.1", 00:17:42.733 "trsvcid": "47274" 00:17:42.733 }, 00:17:42.733 "auth": { 00:17:42.733 "state": "completed", 00:17:42.733 "digest": "sha384", 00:17:42.733 "dhgroup": "ffdhe8192" 00:17:42.733 } 00:17:42.733 } 00:17:42.733 ]' 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.991 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:43.558 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.558 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.558 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:43.558 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.558 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.558 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.558 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.558 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.817 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.076 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.335 { 
00:17:44.335 "cntlid": 93, 00:17:44.335 "qid": 0, 00:17:44.335 "state": "enabled", 00:17:44.335 "thread": "nvmf_tgt_poll_group_000", 00:17:44.335 "listen_address": { 00:17:44.335 "trtype": "TCP", 00:17:44.335 "adrfam": "IPv4", 00:17:44.335 "traddr": "10.0.0.2", 00:17:44.335 "trsvcid": "4420" 00:17:44.335 }, 00:17:44.335 "peer_address": { 00:17:44.335 "trtype": "TCP", 00:17:44.335 "adrfam": "IPv4", 00:17:44.335 "traddr": "10.0.0.1", 00:17:44.335 "trsvcid": "47308" 00:17:44.335 }, 00:17:44.335 "auth": { 00:17:44.335 "state": "completed", 00:17:44.335 "digest": "sha384", 00:17:44.335 "dhgroup": "ffdhe8192" 00:17:44.335 } 00:17:44.335 } 00:17:44.335 ]' 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.335 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.593 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.593 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.593 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.593 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.593 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.851 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.419 18:32:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.419 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.987 00:17:45.987 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.987 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.987 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.246 { 00:17:46.246 "cntlid": 95, 00:17:46.246 "qid": 0, 00:17:46.246 "state": "enabled", 00:17:46.246 "thread": "nvmf_tgt_poll_group_000", 00:17:46.246 "listen_address": { 00:17:46.246 "trtype": "TCP", 00:17:46.246 "adrfam": "IPv4", 00:17:46.246 "traddr": "10.0.0.2", 00:17:46.246 "trsvcid": "4420" 00:17:46.246 }, 00:17:46.246 "peer_address": { 00:17:46.246 "trtype": "TCP", 00:17:46.246 "adrfam": "IPv4", 00:17:46.246 "traddr": "10.0.0.1", 00:17:46.246 "trsvcid": "47324" 00:17:46.246 }, 00:17:46.246 "auth": { 00:17:46.246 "state": "completed", 00:17:46.246 "digest": "sha384", 00:17:46.246 "dhgroup": "ffdhe8192" 00:17:46.246 } 00:17:46.246 } 00:17:46.246 ]' 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.246 18:32:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.246 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.505 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.072 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.339 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.339 00:17:47.651 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.651 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.651 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.651 { 00:17:47.651 "cntlid": 97, 00:17:47.651 "qid": 0, 00:17:47.651 "state": "enabled", 00:17:47.651 "thread": "nvmf_tgt_poll_group_000", 00:17:47.651 "listen_address": { 00:17:47.651 "trtype": "TCP", 00:17:47.651 "adrfam": "IPv4", 00:17:47.651 "traddr": "10.0.0.2", 00:17:47.651 "trsvcid": "4420" 00:17:47.651 }, 00:17:47.651 "peer_address": { 00:17:47.651 "trtype": "TCP", 00:17:47.651 "adrfam": "IPv4", 00:17:47.651 "traddr": "10.0.0.1", 00:17:47.651 "trsvcid": "47358" 00:17:47.651 }, 00:17:47.651 "auth": { 00:17:47.651 "state": "completed", 00:17:47.651 "digest": "sha512", 00:17:47.651 "dhgroup": "null" 00:17:47.651 } 00:17:47.651 } 00:17:47.651 ]' 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:47.651 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.941 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.941 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.941 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.941 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret 
DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.507 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.766 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.024 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.024 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.282 { 00:17:49.282 "cntlid": 99, 00:17:49.282 "qid": 0, 00:17:49.282 "state": "enabled", 00:17:49.282 "thread": "nvmf_tgt_poll_group_000", 00:17:49.282 "listen_address": { 00:17:49.282 "trtype": "TCP", 00:17:49.282 "adrfam": "IPv4", 00:17:49.282 "traddr": "10.0.0.2", 00:17:49.282 "trsvcid": "4420" 00:17:49.282 }, 00:17:49.282 "peer_address": { 00:17:49.282 "trtype": "TCP", 00:17:49.282 "adrfam": "IPv4", 00:17:49.282 "traddr": "10.0.0.1", 00:17:49.282 "trsvcid": "34260" 00:17:49.282 }, 00:17:49.282 "auth": { 00:17:49.282 "state": "completed", 00:17:49.282 "digest": "sha512", 00:17:49.282 "dhgroup": "null" 00:17:49.282 } 00:17:49.282 } 00:17:49.282 ]' 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.282 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.541 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.108 18:32:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.108 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.366 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.366 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.624 { 00:17:50.624 "cntlid": 101, 00:17:50.624 "qid": 0, 00:17:50.624 "state": "enabled", 00:17:50.624 "thread": "nvmf_tgt_poll_group_000", 00:17:50.624 "listen_address": { 00:17:50.624 "trtype": "TCP", 00:17:50.624 "adrfam": "IPv4", 00:17:50.624 "traddr": "10.0.0.2", 00:17:50.624 "trsvcid": "4420" 00:17:50.624 }, 00:17:50.624 "peer_address": { 00:17:50.624 "trtype": "TCP", 00:17:50.624 "adrfam": "IPv4", 00:17:50.624 "traddr": "10.0.0.1", 00:17:50.624 "trsvcid": "34284" 00:17:50.624 }, 00:17:50.624 "auth": 
{ 00:17:50.624 "state": "completed", 00:17:50.624 "digest": "sha512", 00:17:50.624 "dhgroup": "null" 00:17:50.624 } 00:17:50.624 } 00:17:50.624 ]' 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.624 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.882 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.447 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.706 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.964 00:17:51.964 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.964 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.964 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.223 { 00:17:52.223 "cntlid": 103, 00:17:52.223 "qid": 0, 00:17:52.223 "state": "enabled", 00:17:52.223 "thread": "nvmf_tgt_poll_group_000", 00:17:52.223 "listen_address": { 00:17:52.223 "trtype": "TCP", 00:17:52.223 "adrfam": "IPv4", 00:17:52.223 "traddr": "10.0.0.2", 00:17:52.223 "trsvcid": "4420" 00:17:52.223 }, 00:17:52.223 "peer_address": { 00:17:52.223 "trtype": "TCP", 00:17:52.223 "adrfam": "IPv4", 00:17:52.223 "traddr": "10.0.0.1", 00:17:52.223 "trsvcid": "34308" 00:17:52.223 }, 00:17:52.223 "auth": { 00:17:52.223 "state": "completed", 00:17:52.223 "digest": "sha512", 00:17:52.223 "dhgroup": "null" 00:17:52.223 } 00:17:52.223 } 00:17:52.223 ]' 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.223 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.224 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.224 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.224 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.224 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.224 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.482 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.049 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.308 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.308 00:17:53.566 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.566 18:32:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.566 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.566 { 00:17:53.566 "cntlid": 105, 00:17:53.566 "qid": 0, 00:17:53.566 "state": "enabled", 00:17:53.566 "thread": "nvmf_tgt_poll_group_000", 00:17:53.566 "listen_address": { 00:17:53.566 "trtype": "TCP", 00:17:53.566 "adrfam": "IPv4", 00:17:53.566 "traddr": "10.0.0.2", 00:17:53.566 "trsvcid": "4420" 00:17:53.566 }, 00:17:53.566 "peer_address": { 00:17:53.566 "trtype": "TCP", 00:17:53.566 "adrfam": "IPv4", 00:17:53.566 "traddr": "10.0.0.1", 00:17:53.566 "trsvcid": "34330" 00:17:53.566 }, 00:17:53.566 "auth": { 00:17:53.566 "state": "completed", 00:17:53.566 "digest": "sha512", 00:17:53.566 "dhgroup": "ffdhe2048" 00:17:53.566 } 00:17:53.566 } 00:17:53.566 ]' 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.566 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.824 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.824 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.824 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.824 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.824 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.082 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
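
Stepping back: as the for-lines at target/auth.sh@91-93 in this trace show, the script walks the full cross-product of digests, DH groups and key indices, calling connect_authenticate once per combination. A sketch of that outer loop follows; the array contents are never dumped in the log, so the comments list only the values actually observed here.

for digest in "${digests[@]}"; do         # sha384 and sha512 appear in this log
    for dhgroup in "${dhgroups[@]}"; do   # ffdhe8192, null, ffdhe2048, ffdhe3072 appear
        for keyid in "${!keys[@]}"; do    # indices 0 1 2 3
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

One asymmetry is worth noting: key index 3 has no companion controller key, so the expansion at target/auth.sh@37, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), drops the --dhchap-ctrlr-key argument for it. That is why the key3 passes add the host with --dhchap-key key3 alone, and why the matching nvme connect commands carry only --dhchap-secret DHHC-1:03:...: with no --dhchap-ctrl-secret: those connections authenticate the host unidirectionally. (In the DHHC-1:NN:...: notation the two-digit field names the hash the secret was transformed with, 00 meaning untransformed; that is general nvme-cli convention rather than something this log states.)
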
00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.648 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.648 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.905 00:17:54.905 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.905 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.905 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.163 { 00:17:55.163 "cntlid": 107, 00:17:55.163 "qid": 0, 00:17:55.163 "state": "enabled", 00:17:55.163 "thread": 
"nvmf_tgt_poll_group_000", 00:17:55.163 "listen_address": { 00:17:55.163 "trtype": "TCP", 00:17:55.163 "adrfam": "IPv4", 00:17:55.163 "traddr": "10.0.0.2", 00:17:55.163 "trsvcid": "4420" 00:17:55.163 }, 00:17:55.163 "peer_address": { 00:17:55.163 "trtype": "TCP", 00:17:55.163 "adrfam": "IPv4", 00:17:55.163 "traddr": "10.0.0.1", 00:17:55.163 "trsvcid": "34362" 00:17:55.163 }, 00:17:55.163 "auth": { 00:17:55.163 "state": "completed", 00:17:55.163 "digest": "sha512", 00:17:55.163 "dhgroup": "ffdhe2048" 00:17:55.163 } 00:17:55.163 } 00:17:55.163 ]' 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.163 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.421 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.986 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.245 18:32:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.245 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.503 00:17:56.503 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.503 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.503 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.503 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.503 { 00:17:56.503 "cntlid": 109, 00:17:56.503 "qid": 0, 00:17:56.503 "state": "enabled", 00:17:56.503 "thread": "nvmf_tgt_poll_group_000", 00:17:56.503 "listen_address": { 00:17:56.503 "trtype": "TCP", 00:17:56.504 "adrfam": "IPv4", 00:17:56.504 "traddr": "10.0.0.2", 00:17:56.504 "trsvcid": "4420" 00:17:56.504 }, 00:17:56.504 "peer_address": { 00:17:56.504 "trtype": "TCP", 00:17:56.504 "adrfam": "IPv4", 00:17:56.504 "traddr": "10.0.0.1", 00:17:56.504 "trsvcid": "34390" 00:17:56.504 }, 00:17:56.504 "auth": { 00:17:56.504 "state": "completed", 00:17:56.504 "digest": "sha512", 00:17:56.504 "dhgroup": "ffdhe2048" 00:17:56.504 } 00:17:56.504 } 00:17:56.504 ]' 00:17:56.504 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.761 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.018 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.583 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.583 18:32:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.841 00:17:57.841 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.841 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.841 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.098 { 00:17:58.098 "cntlid": 111, 00:17:58.098 "qid": 0, 00:17:58.098 "state": "enabled", 00:17:58.098 "thread": "nvmf_tgt_poll_group_000", 00:17:58.098 "listen_address": { 00:17:58.098 "trtype": "TCP", 00:17:58.098 "adrfam": "IPv4", 00:17:58.098 "traddr": "10.0.0.2", 00:17:58.098 "trsvcid": "4420" 00:17:58.098 }, 00:17:58.098 "peer_address": { 00:17:58.098 "trtype": "TCP", 00:17:58.098 "adrfam": "IPv4", 00:17:58.098 "traddr": "10.0.0.1", 00:17:58.098 "trsvcid": "34402" 00:17:58.098 }, 00:17:58.098 "auth": { 00:17:58.098 "state": "completed", 00:17:58.098 "digest": "sha512", 00:17:58.098 "dhgroup": "ffdhe2048" 00:17:58.098 } 00:17:58.098 } 00:17:58.098 ]' 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.098 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.099 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.356 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.921 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.178 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.436 00:17:59.436 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.436 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.436 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.693 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.694 { 00:17:59.694 "cntlid": 113, 00:17:59.694 "qid": 0, 00:17:59.694 "state": "enabled", 00:17:59.694 "thread": "nvmf_tgt_poll_group_000", 00:17:59.694 "listen_address": { 00:17:59.694 "trtype": "TCP", 00:17:59.694 "adrfam": "IPv4", 00:17:59.694 "traddr": "10.0.0.2", 00:17:59.694 "trsvcid": "4420" 00:17:59.694 }, 00:17:59.694 "peer_address": { 00:17:59.694 "trtype": "TCP", 00:17:59.694 "adrfam": "IPv4", 00:17:59.694 "traddr": "10.0.0.1", 00:17:59.694 "trsvcid": "36624" 00:17:59.694 }, 00:17:59.694 "auth": { 00:17:59.694 "state": "completed", 00:17:59.694 "digest": "sha512", 00:17:59.694 "dhgroup": "ffdhe3072" 00:17:59.694 } 00:17:59.694 } 00:17:59.694 ]' 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.694 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.952 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.518 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.518 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.775 00:18:00.775 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.775 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.775 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.034 { 00:18:01.034 "cntlid": 115, 00:18:01.034 "qid": 0, 00:18:01.034 "state": "enabled", 00:18:01.034 "thread": "nvmf_tgt_poll_group_000", 00:18:01.034 "listen_address": { 00:18:01.034 "trtype": "TCP", 00:18:01.034 "adrfam": "IPv4", 00:18:01.034 "traddr": "10.0.0.2", 00:18:01.034 "trsvcid": "4420" 00:18:01.034 }, 00:18:01.034 "peer_address": { 00:18:01.034 "trtype": "TCP", 00:18:01.034 "adrfam": "IPv4", 00:18:01.034 "traddr": "10.0.0.1", 00:18:01.034 "trsvcid": "36658" 00:18:01.034 }, 00:18:01.034 "auth": { 00:18:01.034 "state": "completed", 00:18:01.034 "digest": "sha512", 00:18:01.034 "dhgroup": "ffdhe3072" 00:18:01.034 } 00:18:01.034 } 
00:18:01.034 ]' 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.034 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.292 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.292 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.292 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.292 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.859 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.117 18:32:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.117 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.375 00:18:02.375 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.375 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.375 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.633 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.633 { 00:18:02.633 "cntlid": 117, 00:18:02.633 "qid": 0, 00:18:02.633 "state": "enabled", 00:18:02.633 "thread": "nvmf_tgt_poll_group_000", 00:18:02.633 "listen_address": { 00:18:02.633 "trtype": "TCP", 00:18:02.633 "adrfam": "IPv4", 00:18:02.633 "traddr": "10.0.0.2", 00:18:02.633 "trsvcid": "4420" 00:18:02.633 }, 00:18:02.633 "peer_address": { 00:18:02.633 "trtype": "TCP", 00:18:02.633 "adrfam": "IPv4", 00:18:02.633 "traddr": "10.0.0.1", 00:18:02.633 "trsvcid": "36696" 00:18:02.633 }, 00:18:02.633 "auth": { 00:18:02.633 "state": "completed", 00:18:02.633 "digest": "sha512", 00:18:02.633 "dhgroup": "ffdhe3072" 00:18:02.633 } 00:18:02.633 } 00:18:02.633 ]' 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.633 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.634 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.634 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.892 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.458 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.716 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.974 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.974 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.974 { 00:18:03.974 "cntlid": 119, 00:18:03.974 "qid": 0, 00:18:03.974 "state": "enabled", 00:18:03.974 "thread": "nvmf_tgt_poll_group_000", 00:18:03.974 "listen_address": { 00:18:03.974 "trtype": "TCP", 00:18:03.974 "adrfam": "IPv4", 00:18:03.974 "traddr": "10.0.0.2", 00:18:03.974 "trsvcid": "4420" 00:18:03.974 }, 00:18:03.975 "peer_address": { 00:18:03.975 "trtype": "TCP", 00:18:03.975 "adrfam": "IPv4", 00:18:03.975 "traddr": "10.0.0.1", 00:18:03.975 "trsvcid": "36724" 00:18:03.975 }, 00:18:03.975 "auth": { 00:18:03.975 "state": "completed", 00:18:03.975 "digest": "sha512", 00:18:03.975 "dhgroup": "ffdhe3072" 00:18:03.975 } 00:18:03.975 } 00:18:03.975 ]' 00:18:03.975 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.232 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.490 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.057 18:32:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.057 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.316 00:18:05.316 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.316 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.316 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.574 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.574 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.574 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.574 { 00:18:05.574 "cntlid": 121, 00:18:05.574 "qid": 0, 00:18:05.574 "state": "enabled", 00:18:05.574 "thread": "nvmf_tgt_poll_group_000", 00:18:05.574 "listen_address": { 00:18:05.574 "trtype": "TCP", 00:18:05.574 "adrfam": "IPv4", 
00:18:05.574 "traddr": "10.0.0.2", 00:18:05.574 "trsvcid": "4420" 00:18:05.574 }, 00:18:05.574 "peer_address": { 00:18:05.574 "trtype": "TCP", 00:18:05.574 "adrfam": "IPv4", 00:18:05.574 "traddr": "10.0.0.1", 00:18:05.574 "trsvcid": "36738" 00:18:05.574 }, 00:18:05.574 "auth": { 00:18:05.574 "state": "completed", 00:18:05.574 "digest": "sha512", 00:18:05.574 "dhgroup": "ffdhe4096" 00:18:05.574 } 00:18:05.574 } 00:18:05.574 ]' 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.574 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.833 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.833 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.833 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.833 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.400 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.658 18:32:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.658 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.659 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.916 00:18:06.916 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.916 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.916 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.174 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.174 { 00:18:07.175 "cntlid": 123, 00:18:07.175 "qid": 0, 00:18:07.175 "state": "enabled", 00:18:07.175 "thread": "nvmf_tgt_poll_group_000", 00:18:07.175 "listen_address": { 00:18:07.175 "trtype": "TCP", 00:18:07.175 "adrfam": "IPv4", 00:18:07.175 "traddr": "10.0.0.2", 00:18:07.175 "trsvcid": "4420" 00:18:07.175 }, 00:18:07.175 "peer_address": { 00:18:07.175 "trtype": "TCP", 00:18:07.175 "adrfam": "IPv4", 00:18:07.175 "traddr": "10.0.0.1", 00:18:07.175 "trsvcid": "36764" 00:18:07.175 }, 00:18:07.175 "auth": { 00:18:07.175 "state": "completed", 00:18:07.175 "digest": "sha512", 00:18:07.175 "dhgroup": "ffdhe4096" 00:18:07.175 } 00:18:07.175 } 00:18:07.175 ]' 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.175 18:32:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.175 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.433 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.000 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.001 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.001 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.001 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.259 00:18:08.259 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.259 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.259 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.517 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.517 { 00:18:08.517 "cntlid": 125, 00:18:08.517 "qid": 0, 00:18:08.518 "state": "enabled", 00:18:08.518 "thread": "nvmf_tgt_poll_group_000", 00:18:08.518 "listen_address": { 00:18:08.518 "trtype": "TCP", 00:18:08.518 "adrfam": "IPv4", 00:18:08.518 "traddr": "10.0.0.2", 00:18:08.518 "trsvcid": "4420" 00:18:08.518 }, 00:18:08.518 "peer_address": { 00:18:08.518 "trtype": "TCP", 00:18:08.518 "adrfam": "IPv4", 00:18:08.518 "traddr": "10.0.0.1", 00:18:08.518 "trsvcid": "49386" 00:18:08.518 }, 00:18:08.518 "auth": { 00:18:08.518 "state": "completed", 00:18:08.518 "digest": "sha512", 00:18:08.518 "dhgroup": "ffdhe4096" 00:18:08.518 } 00:18:08.518 } 00:18:08.518 ]' 00:18:08.518 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.518 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.518 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.776 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
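The records above trace one full connect_authenticate cycle: restrict the host to a digest/dhgroup pair, add the host NQN to the subsystem with a DH-HMAC-CHAP key pair, attach a controller from the host side, check the qpair's auth state with jq, then detach (and repeat the handshake once more through nvme-cli). A minimal standalone sketch of that cycle follows, assuming the same paths and NQNs as this run, that the target answers on rpc.py's default socket, and that the named keys (key2/ckey2) were already registered earlier in the test; the RPC/SUBNQN/HOSTNQN variables are shorthand introduced here, not part of target/auth.sh.

    #!/usr/bin/env bash
    # Sketch of one DH-HMAC-CHAP attach/verify/detach cycle (assumptions: target
    # on 10.0.0.2:4420 + default RPC socket, host service on /var/tmp/host.sock,
    # keys "key2"/"ckey2" already loaded in both keyrings).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0                                 # shorthand
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

    # Pin the host side to a single digest and DH group.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Allow the host on the subsystem with key2/ckey2, then attach from the host.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the qpair authenticated with the expected parameters
    # (jq -e exits non-zero if the expression is false).
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e '.[0].auth
        | .state == "completed" and .digest == "sha512" and .dhgroup == "ffdhe4096"'

    # Tear the host-side controller down again before the next combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

All flags mirror the invocations recorded in the log; only the variable names and the jq -e condensation of the three separate [[ … ]] checks are editorial.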
00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.342 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.601 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.860 00:18:09.860 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.860 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.860 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.120 { 00:18:10.120 "cntlid": 127, 00:18:10.120 "qid": 0, 00:18:10.120 "state": "enabled", 00:18:10.120 "thread": "nvmf_tgt_poll_group_000", 00:18:10.120 "listen_address": { 00:18:10.120 "trtype": "TCP", 00:18:10.120 "adrfam": "IPv4", 00:18:10.120 "traddr": "10.0.0.2", 00:18:10.120 "trsvcid": "4420" 00:18:10.120 }, 00:18:10.120 "peer_address": { 00:18:10.120 "trtype": "TCP", 00:18:10.120 "adrfam": "IPv4", 00:18:10.120 "traddr": "10.0.0.1", 00:18:10.120 "trsvcid": "49412" 00:18:10.120 }, 00:18:10.120 "auth": { 00:18:10.120 "state": "completed", 00:18:10.120 "digest": "sha512", 00:18:10.120 "dhgroup": "ffdhe4096" 00:18:10.120 } 00:18:10.120 } 00:18:10.120 ]' 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.120 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.415 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.009 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.010 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.576 00:18:11.576 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.576 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.576 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.576 { 00:18:11.576 "cntlid": 129, 00:18:11.576 "qid": 0, 00:18:11.576 "state": "enabled", 00:18:11.576 "thread": "nvmf_tgt_poll_group_000", 00:18:11.576 "listen_address": { 00:18:11.576 "trtype": "TCP", 00:18:11.576 "adrfam": "IPv4", 00:18:11.576 "traddr": "10.0.0.2", 00:18:11.576 "trsvcid": "4420" 00:18:11.576 }, 00:18:11.576 "peer_address": { 00:18:11.576 "trtype": "TCP", 00:18:11.576 "adrfam": "IPv4", 00:18:11.576 "traddr": "10.0.0.1", 00:18:11.576 "trsvcid": "49440" 00:18:11.576 }, 00:18:11.576 "auth": { 00:18:11.576 "state": "completed", 00:18:11.576 "digest": "sha512", 00:18:11.576 "dhgroup": "ffdhe6144" 00:18:11.576 } 00:18:11.576 } 00:18:11.576 ]' 00:18:11.576 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.835 18:32:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.835 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.094 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.660 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.660 18:32:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.660 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.227 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.227 { 00:18:13.227 "cntlid": 131, 00:18:13.227 "qid": 0, 00:18:13.227 "state": "enabled", 00:18:13.227 "thread": "nvmf_tgt_poll_group_000", 00:18:13.227 "listen_address": { 00:18:13.227 "trtype": "TCP", 00:18:13.227 "adrfam": "IPv4", 00:18:13.227 "traddr": "10.0.0.2", 00:18:13.227 "trsvcid": "4420" 00:18:13.227 }, 00:18:13.227 "peer_address": { 00:18:13.227 "trtype": "TCP", 00:18:13.227 "adrfam": "IPv4", 00:18:13.227 "traddr": "10.0.0.1", 00:18:13.227 "trsvcid": "49470" 00:18:13.227 }, 00:18:13.227 "auth": { 00:18:13.227 "state": "completed", 00:18:13.227 "digest": "sha512", 00:18:13.227 "dhgroup": "ffdhe6144" 00:18:13.227 } 00:18:13.227 } 00:18:13.227 ]' 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.227 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.486 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.053 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.311 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.569 00:18:14.569 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.569 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.569 18:33:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.827 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.827 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.827 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.828 { 00:18:14.828 "cntlid": 133, 00:18:14.828 "qid": 0, 00:18:14.828 "state": "enabled", 00:18:14.828 "thread": "nvmf_tgt_poll_group_000", 00:18:14.828 "listen_address": { 00:18:14.828 "trtype": "TCP", 00:18:14.828 "adrfam": "IPv4", 00:18:14.828 "traddr": "10.0.0.2", 00:18:14.828 "trsvcid": "4420" 00:18:14.828 }, 00:18:14.828 "peer_address": { 00:18:14.828 "trtype": "TCP", 00:18:14.828 "adrfam": "IPv4", 00:18:14.828 "traddr": "10.0.0.1", 00:18:14.828 "trsvcid": "49502" 00:18:14.828 }, 00:18:14.828 "auth": { 00:18:14.828 "state": "completed", 00:18:14.828 "digest": "sha512", 00:18:14.828 "dhgroup": "ffdhe6144" 00:18:14.828 } 00:18:14.828 } 00:18:14.828 ]' 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.828 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.086 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.086 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.086 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.086 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.653 18:33:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.653 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.911 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.170 00:18:16.170 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.170 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.170 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.430 { 00:18:16.430 "cntlid": 135, 00:18:16.430 "qid": 0, 00:18:16.430 "state": "enabled", 00:18:16.430 "thread": "nvmf_tgt_poll_group_000", 00:18:16.430 "listen_address": { 00:18:16.430 "trtype": "TCP", 00:18:16.430 "adrfam": "IPv4", 00:18:16.430 "traddr": "10.0.0.2", 00:18:16.430 "trsvcid": "4420" 00:18:16.430 }, 
00:18:16.430 "peer_address": { 00:18:16.430 "trtype": "TCP", 00:18:16.430 "adrfam": "IPv4", 00:18:16.430 "traddr": "10.0.0.1", 00:18:16.430 "trsvcid": "49520" 00:18:16.430 }, 00:18:16.430 "auth": { 00:18:16.430 "state": "completed", 00:18:16.430 "digest": "sha512", 00:18:16.430 "dhgroup": "ffdhe6144" 00:18:16.430 } 00:18:16.430 } 00:18:16.430 ]' 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.430 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.688 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.688 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.688 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.688 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.256 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.515 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.081 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.081 { 00:18:18.081 "cntlid": 137, 00:18:18.081 "qid": 0, 00:18:18.081 "state": "enabled", 00:18:18.081 "thread": "nvmf_tgt_poll_group_000", 00:18:18.081 "listen_address": { 00:18:18.081 "trtype": "TCP", 00:18:18.081 "adrfam": "IPv4", 00:18:18.081 "traddr": "10.0.0.2", 00:18:18.081 "trsvcid": "4420" 00:18:18.081 }, 00:18:18.081 "peer_address": { 00:18:18.081 "trtype": "TCP", 00:18:18.081 "adrfam": "IPv4", 00:18:18.081 "traddr": "10.0.0.1", 00:18:18.081 "trsvcid": "49544" 00:18:18.081 }, 00:18:18.081 "auth": { 00:18:18.081 "state": "completed", 00:18:18.081 "digest": "sha512", 00:18:18.081 "dhgroup": "ffdhe8192" 00:18:18.081 } 00:18:18.081 } 00:18:18.081 ]' 00:18:18.081 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.339 18:33:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.339 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.596 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.163 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.729 00:18:19.729 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.729 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.729 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.987 { 00:18:19.987 "cntlid": 139, 00:18:19.987 "qid": 0, 00:18:19.987 "state": "enabled", 00:18:19.987 "thread": "nvmf_tgt_poll_group_000", 00:18:19.987 "listen_address": { 00:18:19.987 "trtype": "TCP", 00:18:19.987 "adrfam": "IPv4", 00:18:19.987 "traddr": "10.0.0.2", 00:18:19.987 "trsvcid": "4420" 00:18:19.987 }, 00:18:19.987 "peer_address": { 00:18:19.987 "trtype": "TCP", 00:18:19.987 "adrfam": "IPv4", 00:18:19.987 "traddr": "10.0.0.1", 00:18:19.987 "trsvcid": "49790" 00:18:19.987 }, 00:18:19.987 "auth": { 00:18:19.987 "state": "completed", 00:18:19.987 "digest": "sha512", 00:18:19.987 "dhgroup": "ffdhe8192" 00:18:19.987 } 00:18:19.987 } 00:18:19.987 ]' 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.987 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.245 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzM0OGQ2MmY2ODNiYWExOGMxM2EyMmFiMWY2YzMxNWJQoz9+: --dhchap-ctrl-secret DHHC-1:02:ZDJmNDU5MGM5ZDM1Y2ZmYmI2ZGZmM2ZmOWNhZGVhMTg4MDdiZjc4NWE1MjI4OTc3SvDdrg==: 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.811 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.812 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.812 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.812 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 18:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.070 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.070 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.328 00:18:21.328 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.328 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.328 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.587 { 00:18:21.587 "cntlid": 141, 00:18:21.587 "qid": 0, 00:18:21.587 "state": "enabled", 00:18:21.587 "thread": "nvmf_tgt_poll_group_000", 00:18:21.587 "listen_address": { 00:18:21.587 "trtype": "TCP", 00:18:21.587 "adrfam": "IPv4", 00:18:21.587 "traddr": "10.0.0.2", 00:18:21.587 "trsvcid": "4420" 00:18:21.587 }, 00:18:21.587 "peer_address": { 00:18:21.587 "trtype": "TCP", 00:18:21.587 "adrfam": "IPv4", 00:18:21.587 "traddr": "10.0.0.1", 00:18:21.587 "trsvcid": "49818" 00:18:21.587 }, 00:18:21.587 "auth": { 00:18:21.587 "state": "completed", 00:18:21.587 "digest": "sha512", 00:18:21.587 "dhgroup": "ffdhe8192" 00:18:21.587 } 00:18:21.587 } 00:18:21.587 ]' 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.587 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.846 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.846 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.846 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.846 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTYzZmQzN2FjOTA3ODVjN2YzM2U3MzhmMDA3NzIwMmU0ZWU4ZGI3Yzg1NzIyYTczzM77jA==: --dhchap-ctrl-secret DHHC-1:01:MTI1MjU0NjUyZTkzOWRhYWFmOGZkOGFmYWUzYmYwZDblwPpd: 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.413 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.672 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.252 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.252 { 00:18:23.252 "cntlid": 143, 00:18:23.252 "qid": 0, 00:18:23.252 "state": "enabled", 00:18:23.252 "thread": "nvmf_tgt_poll_group_000", 00:18:23.252 "listen_address": { 00:18:23.252 "trtype": "TCP", 00:18:23.252 "adrfam": "IPv4", 00:18:23.252 "traddr": "10.0.0.2", 00:18:23.252 "trsvcid": "4420" 00:18:23.252 }, 00:18:23.252 "peer_address": { 00:18:23.252 "trtype": "TCP", 00:18:23.252 "adrfam": "IPv4", 00:18:23.252 "traddr": "10.0.0.1", 00:18:23.252 "trsvcid": "49830" 00:18:23.252 }, 00:18:23.252 "auth": { 00:18:23.252 "state": "completed", 00:18:23.252 "digest": "sha512", 00:18:23.252 "dhgroup": "ffdhe8192" 00:18:23.252 } 00:18:23.252 } 00:18:23.252 ]' 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.252 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.252 
18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.510 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.510 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.510 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.510 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.510 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.510 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.077 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.336 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.904 00:18:24.904 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.904 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.904 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.163 { 00:18:25.163 "cntlid": 145, 00:18:25.163 "qid": 0, 00:18:25.163 "state": "enabled", 00:18:25.163 "thread": "nvmf_tgt_poll_group_000", 00:18:25.163 "listen_address": { 00:18:25.163 "trtype": "TCP", 00:18:25.163 "adrfam": "IPv4", 00:18:25.163 "traddr": "10.0.0.2", 00:18:25.163 "trsvcid": "4420" 00:18:25.163 }, 00:18:25.163 "peer_address": { 00:18:25.163 "trtype": "TCP", 00:18:25.163 "adrfam": "IPv4", 00:18:25.163 "traddr": "10.0.0.1", 00:18:25.163 "trsvcid": "49852" 00:18:25.163 }, 00:18:25.163 "auth": { 00:18:25.163 "state": "completed", 00:18:25.163 "digest": "sha512", 00:18:25.163 "dhgroup": "ffdhe8192" 00:18:25.163 } 00:18:25.163 } 00:18:25.163 ]' 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.163 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.421 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjY4NWJiNGE5MTFiN2E4YTBiNDhiZjM3MWIxNDY3ZjExYzE5YmM4MWZkNWY3ODBiVL/WcA==: --dhchap-ctrl-secret DHHC-1:03:OTkyNTNiMjcyYmY3Y2MzMWEwMGE1NTAzMzBhODZlNDMxOTIwMDhkOGM1YTNmNDFhYzVhZjcwYzViNzRhMTNkZJ0Cjc0=: 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:25.989 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:26.248 request: 00:18:26.248 { 00:18:26.248 "name": "nvme0", 00:18:26.248 "trtype": "tcp", 00:18:26.248 "traddr": "10.0.0.2", 00:18:26.248 "adrfam": "ipv4", 00:18:26.248 "trsvcid": "4420", 00:18:26.248 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.248 "prchk_reftag": false, 00:18:26.248 "prchk_guard": false, 00:18:26.248 "hdgst": false, 00:18:26.248 "ddgst": false, 00:18:26.248 "dhchap_key": "key2", 00:18:26.248 "method": "bdev_nvme_attach_controller", 00:18:26.248 "req_id": 1 00:18:26.248 } 00:18:26.248 Got JSON-RPC error response 00:18:26.248 response: 00:18:26.248 { 00:18:26.248 "code": -5, 00:18:26.248 "message": "Input/output error" 00:18:26.248 } 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.248 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.507 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.766 request: 00:18:26.766 { 00:18:26.766 "name": "nvme0", 00:18:26.766 "trtype": "tcp", 00:18:26.766 "traddr": "10.0.0.2", 00:18:26.766 "adrfam": "ipv4", 00:18:26.766 "trsvcid": "4420", 00:18:26.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.766 "prchk_reftag": false, 00:18:26.766 "prchk_guard": false, 00:18:26.766 "hdgst": false, 00:18:26.766 "ddgst": false, 00:18:26.766 "dhchap_key": "key1", 00:18:26.766 "dhchap_ctrlr_key": "ckey2", 00:18:26.766 "method": "bdev_nvme_attach_controller", 00:18:26.766 "req_id": 1 00:18:26.766 } 00:18:26.766 Got JSON-RPC error response 00:18:26.766 response: 00:18:26.766 { 00:18:26.766 "code": -5, 00:18:26.766 "message": "Input/output error" 00:18:26.766 } 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.766 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.767 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.335 request: 00:18:27.335 { 00:18:27.335 "name": "nvme0", 00:18:27.335 "trtype": "tcp", 00:18:27.335 "traddr": "10.0.0.2", 00:18:27.335 "adrfam": "ipv4", 00:18:27.335 "trsvcid": "4420", 00:18:27.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:27.335 "prchk_reftag": false, 00:18:27.335 "prchk_guard": false, 00:18:27.335 "hdgst": false, 00:18:27.335 "ddgst": false, 00:18:27.335 "dhchap_key": "key1", 00:18:27.335 "dhchap_ctrlr_key": "ckey1", 00:18:27.335 "method": "bdev_nvme_attach_controller", 00:18:27.335 "req_id": 1 00:18:27.335 } 00:18:27.335 Got JSON-RPC error response 00:18:27.335 response: 00:18:27.335 { 00:18:27.335 "code": -5, 00:18:27.335 "message": "Input/output error" 00:18:27.335 } 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3912424 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3912424 ']' 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3912424 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912424 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912424' 00:18:27.335 killing process with pid 3912424 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3912424 00:18:27.335 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3912424 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3933110 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3933110 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3933110 ']' 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.594 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3933110 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3933110 ']' 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
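The two rejected attach attempts above exercise the suite's negative-auth pattern: the host is registered on the subsystem with key1 only, so an attach that also offers a bidirectional controller key the target does not hold (ckey2, then ckey1) fails DH-HMAC-CHAP and surfaces as JSON-RPC error -5. A minimal standalone sketch of that pattern, reusing the NQNs and flags from this run; the rpc/hostrpc wrappers below stand in for the suite's rpc_cmd and hostrpc helpers and are assumptions:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
rpc()     { "$SPDK_DIR/scripts/rpc.py" "$@"; }                        # target-side RPC
hostrpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side RPC

# Target holds only key1 for this host; offering a controller key the
# target cannot verify must make the attach fail.
rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1
if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi
# Expected response, as logged above: {"code": -5, "message": "Input/output error"}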
00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.530 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.530 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.530 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.530 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:28.530 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.530 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.789 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.048 00:18:29.048 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.048 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.048 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.306 { 00:18:29.306 
"cntlid": 1, 00:18:29.306 "qid": 0, 00:18:29.306 "state": "enabled", 00:18:29.306 "thread": "nvmf_tgt_poll_group_000", 00:18:29.306 "listen_address": { 00:18:29.306 "trtype": "TCP", 00:18:29.306 "adrfam": "IPv4", 00:18:29.306 "traddr": "10.0.0.2", 00:18:29.306 "trsvcid": "4420" 00:18:29.306 }, 00:18:29.306 "peer_address": { 00:18:29.306 "trtype": "TCP", 00:18:29.306 "adrfam": "IPv4", 00:18:29.306 "traddr": "10.0.0.1", 00:18:29.306 "trsvcid": "42670" 00:18:29.306 }, 00:18:29.306 "auth": { 00:18:29.306 "state": "completed", 00:18:29.306 "digest": "sha512", 00:18:29.306 "dhgroup": "ffdhe8192" 00:18:29.306 } 00:18:29.306 } 00:18:29.306 ]' 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.306 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.564 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.564 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.564 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.564 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.564 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.564 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTlkYTZmNDg1ZTk2MmIwNmY4ZGRkMGRhYjg5YjcxZjA2OWM1ZmFmNGE0NzI5ZjQ3MmFlODJlZjI5NDliZmE3MgO4Deg=: 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:30.131 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.390 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.649 request: 00:18:30.649 { 00:18:30.649 "name": "nvme0", 00:18:30.649 "trtype": "tcp", 00:18:30.649 "traddr": "10.0.0.2", 00:18:30.649 "adrfam": "ipv4", 00:18:30.649 "trsvcid": "4420", 00:18:30.649 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.649 "prchk_reftag": false, 00:18:30.649 "prchk_guard": false, 00:18:30.649 "hdgst": false, 00:18:30.649 "ddgst": false, 00:18:30.649 "dhchap_key": "key3", 00:18:30.649 "method": "bdev_nvme_attach_controller", 00:18:30.649 "req_id": 1 00:18:30.649 } 00:18:30.649 Got JSON-RPC error response 00:18:30.649 response: 00:18:30.649 { 00:18:30.649 "code": -5, 00:18:30.649 "message": "Input/output error" 00:18:30.649 } 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:30.649 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.908 request: 00:18:30.908 { 00:18:30.908 "name": "nvme0", 00:18:30.908 "trtype": "tcp", 00:18:30.908 "traddr": "10.0.0.2", 00:18:30.908 "adrfam": "ipv4", 00:18:30.908 "trsvcid": "4420", 00:18:30.908 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.908 "prchk_reftag": false, 00:18:30.908 "prchk_guard": false, 00:18:30.908 "hdgst": false, 00:18:30.908 "ddgst": false, 00:18:30.908 "dhchap_key": "key3", 00:18:30.908 "method": "bdev_nvme_attach_controller", 00:18:30.908 "req_id": 1 00:18:30.908 } 00:18:30.908 Got JSON-RPC error response 00:18:30.908 response: 00:18:30.908 { 00:18:30.908 "code": -5, 00:18:30.908 "message": "Input/output error" 00:18:30.908 } 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.908 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.167 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.425 request: 00:18:31.425 { 00:18:31.425 "name": "nvme0", 00:18:31.425 "trtype": "tcp", 00:18:31.425 "traddr": "10.0.0.2", 00:18:31.425 "adrfam": "ipv4", 00:18:31.425 "trsvcid": "4420", 00:18:31.425 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.425 "prchk_reftag": false, 00:18:31.425 "prchk_guard": false, 00:18:31.425 "hdgst": false, 00:18:31.425 "ddgst": false, 00:18:31.425 
"dhchap_key": "key0", 00:18:31.425 "dhchap_ctrlr_key": "key1", 00:18:31.425 "method": "bdev_nvme_attach_controller", 00:18:31.425 "req_id": 1 00:18:31.425 } 00:18:31.425 Got JSON-RPC error response 00:18:31.425 response: 00:18:31.425 { 00:18:31.425 "code": -5, 00:18:31.425 "message": "Input/output error" 00:18:31.425 } 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:31.425 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:31.425 00:18:31.682 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:31.682 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:31.682 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.682 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.682 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.682 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3912452 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3912452 ']' 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3912452 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912452 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912452' 00:18:31.939 killing process with pid 3912452 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3912452 00:18:31.939 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3912452 
00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.197 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.197 rmmod nvme_tcp 00:18:32.197 rmmod nvme_fabrics 00:18:32.197 rmmod nvme_keyring 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3933110 ']' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3933110 ']' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3933110' 00:18:32.455 killing process with pid 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3933110 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.455 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.990 18:33:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:34.990 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YXt /tmp/spdk.key-sha256.dCQ /tmp/spdk.key-sha384.3WT /tmp/spdk.key-sha512.6tg /tmp/spdk.key-sha512.5Ru /tmp/spdk.key-sha384.Hh1 /tmp/spdk.key-sha256.wCB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:34.990 00:18:34.990 real 2m10.657s 00:18:34.990 user 4m59.659s 00:18:34.990 sys 0m20.669s 00:18:34.990 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.990 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.990 ************************************ 00:18:34.990 END TEST nvmf_auth_target 00:18:34.990 ************************************ 00:18:34.990 18:33:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:34.990 18:33:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:34.990 18:33:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:34.990 18:33:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:34.990 18:33:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.990 18:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.990 ************************************ 00:18:34.990 START TEST nvmf_bdevio_no_huge 00:18:34.990 ************************************ 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:34.990 * Looking for test storage... 00:18:34.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
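build_nvmf_app_args, traced here and continuing just below, assembles the target's command line one array element at a time; the pieces combine into the nvmf_tgt invocation that appears later in this log. A sketch of that composition (array names follow the trace; the NO_HUGE definition is an assumption consistent with the final command line):

NVMF_APP=("$SPDK_DIR/build/bin/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus full tracepoint mask
NO_HUGE=(--no-huge -s 1024)                   # skip hugepages, cap at 1024 MB of memory
NVMF_APP+=("${NO_HUGE[@]}")
# Eventually run inside the target netns with a 4-core mask (cores 3-6):
ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78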
00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:34.990 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.991 18:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.309 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:40.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:40.310 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:40.310 Found net devices under 0000:86:00.0: cvl_0_0 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:40.310 Found net devices under 0000:86:00.1: cvl_0_1 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.310 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:18:40.569 00:18:40.569 --- 10.0.0.2 ping statistics --- 00:18:40.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.569 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:18:40.569 00:18:40.569 --- 10.0.0.1 ping statistics --- 00:18:40.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.569 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3937372 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3937372 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:40.569 18:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3937372 ']' 00:18:40.569 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.569 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.569 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.569 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.569 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:40.569 [2024-07-15 18:33:26.048119] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:40.569 [2024-07-15 18:33:26.048163] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:40.569 [2024-07-15 18:33:26.122796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.827 [2024-07-15 18:33:26.204008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
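waitforlisten, whose entry checks are traced above (autotest_common.sh@829-838), blocks until the freshly started nvmf_tgt answers on its RPC socket before any rpc_cmd runs. A rough reconstruction; the polling body is an assumption, as the trace shows only the argument guard, the "Waiting for process..." echo, and the final return 0:

waitforlisten() {
    [[ -n "$1" ]] || return 1                         # the '[' -z $pid ']' guard in the trace
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # app died during startup
        "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 \
            rpc_get_methods &>/dev/null && return 0   # socket is up and answering
        sleep 0.5
    done
    return 1
}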
00:18:40.827 [2024-07-15 18:33:26.204049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.827 [2024-07-15 18:33:26.204056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.827 [2024-07-15 18:33:26.204061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.827 [2024-07-15 18:33:26.204066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.827 [2024-07-15 18:33:26.204191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:40.827 [2024-07-15 18:33:26.204299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:40.827 [2024-07-15 18:33:26.204405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.827 [2024-07-15 18:33:26.204406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 [2024-07-15 18:33:26.886740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 Malloc0 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.392 18:33:26 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.392 [2024-07-15 18:33:26.931025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:41.392 { 00:18:41.392 "params": { 00:18:41.392 "name": "Nvme$subsystem", 00:18:41.392 "trtype": "$TEST_TRANSPORT", 00:18:41.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.392 "adrfam": "ipv4", 00:18:41.392 "trsvcid": "$NVMF_PORT", 00:18:41.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.392 "hdgst": ${hdgst:-false}, 00:18:41.392 "ddgst": ${ddgst:-false} 00:18:41.392 }, 00:18:41.392 "method": "bdev_nvme_attach_controller" 00:18:41.392 } 00:18:41.392 EOF 00:18:41.392 )") 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:41.392 18:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:41.392 "params": { 00:18:41.392 "name": "Nvme1", 00:18:41.392 "trtype": "tcp", 00:18:41.392 "traddr": "10.0.0.2", 00:18:41.392 "adrfam": "ipv4", 00:18:41.392 "trsvcid": "4420", 00:18:41.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.393 "hdgst": false, 00:18:41.393 "ddgst": false 00:18:41.393 }, 00:18:41.393 "method": "bdev_nvme_attach_controller" 00:18:41.393 }' 00:18:41.650 [2024-07-15 18:33:26.978567] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
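The JSON printed above is the complete per-controller config that gen_nvmf_target_json hands to bdevio through --json /dev/fd/62, a process-substitution file descriptor. An equivalent explicit form, writing the same config to a regular file first (the file path is illustrative):

gen_nvmf_target_json > /tmp/bdevio_nvme.json   # emits the Nvme1 attach config shown above
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json --no-huge -s 1024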
00:18:41.650 [2024-07-15 18:33:26.978612] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3937623 ] 00:18:41.650 [2024-07-15 18:33:27.048609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.650 [2024-07-15 18:33:27.133422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.650 [2024-07-15 18:33:27.133528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.650 [2024-07-15 18:33:27.133530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.908 I/O targets: 00:18:41.908 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:41.908 00:18:41.908 00:18:41.908 CUnit - A unit testing framework for C - Version 2.1-3 00:18:41.908 http://cunit.sourceforge.net/ 00:18:41.908 00:18:41.908 00:18:41.908 Suite: bdevio tests on: Nvme1n1 00:18:41.908 Test: blockdev write read block ...passed 00:18:42.166 Test: blockdev write zeroes read block ...passed 00:18:42.166 Test: blockdev write zeroes read no split ...passed 00:18:42.166 Test: blockdev write zeroes read split ...passed 00:18:42.166 Test: blockdev write zeroes read split partial ...passed 00:18:42.166 Test: blockdev reset ...[2024-07-15 18:33:27.524254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.166 [2024-07-15 18:33:27.524317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaea300 (9): Bad file descriptor 00:18:42.166 [2024-07-15 18:33:27.659488] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
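The reset case above completes cleanly: disconnecting the controller surfaces a transient 'Bad file descriptor' flush error on the old qpair before bdev_nvme reports the reset as successful, and the case is then marked passed. The COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) notices in the comparev-and-writev records below are just as benign: that case deliberately issues fused compare-and-write pairs whose compare half miscompares, and NVMe fused-command semantics then abort the paired write, which is the behavior bdevio asserts.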
00:18:42.166 passed 00:18:42.166 Test: blockdev write read 8 blocks ...passed 00:18:42.166 Test: blockdev write read size > 128k ...passed 00:18:42.166 Test: blockdev write read invalid size ...passed 00:18:42.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:42.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:42.166 Test: blockdev write read max offset ...passed 00:18:42.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:42.424 Test: blockdev writev readv 8 blocks ...passed 00:18:42.424 Test: blockdev writev readv 30 x 1block ...passed 00:18:42.424 Test: blockdev writev readv block ...passed 00:18:42.424 Test: blockdev writev readv size > 128k ...passed 00:18:42.424 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:42.424 Test: blockdev comparev and writev ...[2024-07-15 18:33:27.869940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.869968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.869982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.869990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.870747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:42.424 [2024-07-15 18:33:27.870753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.424 passed 00:18:42.424 Test: blockdev nvme passthru rw ...passed 00:18:42.424 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:33:27.952757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.424 [2024-07-15 18:33:27.952773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.952873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.424 [2024-07-15 18:33:27.952886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.952990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.424 [2024-07-15 18:33:27.952999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.424 [2024-07-15 18:33:27.953098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:42.424 [2024-07-15 18:33:27.953107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.424 passed 00:18:42.424 Test: blockdev nvme admin passthru ...passed 00:18:42.682 Test: blockdev copy ...passed 00:18:42.682 00:18:42.682 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.682 suites 1 1 n/a 0 0 00:18:42.682 tests 23 23 23 0 0 00:18:42.682 asserts 152 152 152 0 n/a 00:18:42.682 00:18:42.682 Elapsed time = 1.223 seconds 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.940 rmmod nvme_tcp 00:18:42.940 rmmod nvme_fabrics 00:18:42.940 rmmod nvme_keyring 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3937372 ']' 00:18:42.940 18:33:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3937372 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3937372 ']' 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3937372 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3937372 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3937372' 00:18:42.940 killing process with pid 3937372 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3937372 00:18:42.940 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3937372 00:18:43.198 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.199 18:33:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.733 18:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.733 00:18:45.733 real 0m10.635s 00:18:45.733 user 0m13.767s 00:18:45.733 sys 0m5.180s 00:18:45.733 18:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.733 18:33:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.733 ************************************ 00:18:45.733 END TEST nvmf_bdevio_no_huge 00:18:45.733 ************************************ 00:18:45.733 18:33:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:45.733 18:33:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:45.733 18:33:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:45.733 18:33:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.733 18:33:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.733 ************************************ 00:18:45.733 START TEST nvmf_tls 00:18:45.733 ************************************ 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:45.733 * Looking for test storage... 
00:18:45.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.733 18:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.017 
18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:51.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:51.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:51.017 Found net devices under 0000:86:00.0: cvl_0_0 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:51.017 Found net devices under 0000:86:00.1: cvl_0_1 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.017 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.018 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:18:51.277 00:18:51.277 --- 10.0.0.2 ping statistics --- 00:18:51.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.277 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:51.277 00:18:51.277 --- 10.0.0.1 ping statistics --- 00:18:51.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.277 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3941370 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3941370 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3941370 ']' 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.277 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.278 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.278 18:33:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.278 [2024-07-15 18:33:36.759829] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:51.278 [2024-07-15 18:33:36.759875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.278 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.278 [2024-07-15 18:33:36.829748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.536 [2024-07-15 18:33:36.906648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.536 [2024-07-15 18:33:36.906680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
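The tcp init traced above wires the two detected E810 ports into a self-contained point-to-point rig: the target port cvl_0_0 moves into a private network namespace while the initiator port cvl_0_1 stays in the root namespace, and a single ping in each direction verifies the path before the target starts. Condensed from the trace (root required; interface names are the ones this host detected):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

This is also why every nvmf_tgt invocation from here on carries the 'ip netns exec cvl_0_0_ns_spdk' prefix.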
00:18:51.536 [2024-07-15 18:33:36.906687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.536 [2024-07-15 18:33:36.906693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.536 [2024-07-15 18:33:36.906698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.536 [2024-07-15 18:33:36.906716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:52.103 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:52.361 true 00:18:52.361 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.361 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:52.620 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:52.620 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:52.620 18:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:52.620 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.620 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:52.878 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:52.878 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:52.878 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.136 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:53.394 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:53.394 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:53.394 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:53.652 18:33:38 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.652 18:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:53.652 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:53.652 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:53.652 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:53.911 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.911 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.HxdiaOj16L 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.nd7EeHGzvR 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.HxdiaOj16L 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.nd7EeHGzvR 00:18:54.170 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:54.428 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:54.428 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.HxdiaOj16L 00:18:54.428 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HxdiaOj16L 00:18:54.428 18:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:54.687 [2024-07-15 18:33:40.133934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.687 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.945 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:54.945 [2024-07-15 18:33:40.470769] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.945 [2024-07-15 18:33:40.470940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.945 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.205 malloc0 00:18:55.205 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.464 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HxdiaOj16L 00:18:55.464 [2024-07-15 18:33:40.972247] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:55.464 18:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.HxdiaOj16L 00:18:55.464 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.667 Initializing NVMe Controllers 00:19:07.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:07.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:07.667 Initialization complete. Launching workers. 
00:19:07.667 ======================================================== 00:19:07.667 Latency(us) 00:19:07.667 Device Information : IOPS MiB/s Average min max 00:19:07.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17070.55 66.68 3749.54 784.15 5920.08 00:19:07.667 ======================================================== 00:19:07.667 Total : 17070.55 66.68 3749.54 784.15 5920.08 00:19:07.667 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HxdiaOj16L 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HxdiaOj16L' 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3943719 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3943719 /var/tmp/bdevperf.sock 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3943719 ']' 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.667 [2024-07-15 18:33:51.150734] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
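Two pieces of setup from the trace above are worth pulling out. First, format_interchange_psk emits keys in the NVMe TLS PSK interchange format, 'NVMeTLSkey-1:01:<base64>:'; judging by the python helper in the trace, the base64 payload is the configured PSK bytes with a 4-byte CRC32 appended. A minimal sketch under that assumption (hex string and layout as in the trace; this is not the helper itself):

python3 - <<'EOF'
import base64, struct, zlib
psk = b"00112233445566778899aabbccddeeff"
crc = struct.pack("<I", zlib.crc32(psk) & 0xffffffff)  # assumed little-endian CRC32 suffix
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(psk + crc).decode())
EOF

Second, the TLS-enabled target itself, condensed from the trace (rpc.py again shorthand for the workspace scripts/rpc.py; note the -k on the listener, which enables TLS and is what prints the 'TLS support is considered experimental' notice):

rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HxdiaOj16L

With that in place, the spdk_nvme_perf run above (-S ssl -q 64 -o 4096 -w randrw -M 30 -t 10, with --psk-path pointing at the same key file) sustains roughly 17.1k IOPS of 4 KiB random mixed I/O over the TLS session, and the bdevperf cases that follow reuse the same key file on the host side.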
00:19:07.667 [2024-07-15 18:33:51.150782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943719 ] 00:19:07.667 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.667 [2024-07-15 18:33:51.215071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.667 [2024-07-15 18:33:51.286293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:07.667 18:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HxdiaOj16L 00:19:07.667 [2024-07-15 18:33:52.080219] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.667 [2024-07-15 18:33:52.080285] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:07.667 TLSTESTn1 00:19:07.667 18:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:07.667 Running I/O for 10 seconds... 00:19:17.641 00:19:17.641 Latency(us) 00:19:17.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.641 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.641 Verification LBA range: start 0x0 length 0x2000 00:19:17.641 TLSTESTn1 : 10.04 5281.43 20.63 0.00 0.00 24177.42 5398.92 33454.57 00:19:17.641 =================================================================================================================== 00:19:17.641 Total : 5281.43 20.63 0.00 0.00 24177.42 5398.92 33454.57 00:19:17.641 0 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3943719 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3943719 ']' 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3943719 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3943719 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3943719' 00:19:17.641 killing process with pid 3943719 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3943719 00:19:17.641 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.641 00:19:17.641 Latency(us) 00:19:17.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:17.641 =================================================================================================================== 00:19:17.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.641 [2024-07-15 18:34:02.360808] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3943719 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nd7EeHGzvR 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nd7EeHGzvR 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nd7EeHGzvR 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nd7EeHGzvR' 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3945571 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.641 18:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3945571 /var/tmp/bdevperf.sock 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3945571 ']' 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.642 18:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.642 [2024-07-15 18:34:02.585383] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
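The NOT wrapper being expanded here (local es=0, valid_exec_arg, type -t run_bdevperf, and the es checks later in the trace) is autotest_common.sh's exit-status inverter: it runs the wrapped command and succeeds only if that command fails, which lets an expected-failure case run under the suite's error trapping. A rough sketch of the idea (the real helper also special-cases signal deaths, the '(( es > 128 ))' check visible in the trace):

NOT() {
    local es=0
    "$@" || es=$?
    # success here means the wrapped command failed, as the test expects
    (( es != 0 ))
}

Here it wraps run_bdevperf with /tmp/tmp.nd7EeHGzvR, the second key, which was never registered for host1 on cnode1, so the attach must fail.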
00:19:17.642 [2024-07-15 18:34:02.585426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945571 ] 00:19:17.642 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.642 [2024-07-15 18:34:02.638333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.642 [2024-07-15 18:34:02.704582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.956 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.956 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.956 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nd7EeHGzvR 00:19:18.214 [2024-07-15 18:34:03.558239] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.214 [2024-07-15 18:34:03.558311] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.214 [2024-07-15 18:34:03.562932] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.214 [2024-07-15 18:34:03.563562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71570 (107): Transport endpoint is not connected 00:19:18.214 [2024-07-15 18:34:03.564554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71570 (9): Bad file descriptor 00:19:18.214 [2024-07-15 18:34:03.565555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.214 [2024-07-15 18:34:03.565565] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.214 [2024-07-15 18:34:03.565573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:18.214 request: 00:19:18.214 { 00:19:18.214 "name": "TLSTEST", 00:19:18.214 "trtype": "tcp", 00:19:18.214 "traddr": "10.0.0.2", 00:19:18.214 "adrfam": "ipv4", 00:19:18.214 "trsvcid": "4420", 00:19:18.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.214 "prchk_reftag": false, 00:19:18.214 "prchk_guard": false, 00:19:18.214 "hdgst": false, 00:19:18.214 "ddgst": false, 00:19:18.214 "psk": "/tmp/tmp.nd7EeHGzvR", 00:19:18.214 "method": "bdev_nvme_attach_controller", 00:19:18.214 "req_id": 1 00:19:18.214 } 00:19:18.214 Got JSON-RPC error response 00:19:18.214 response: 00:19:18.214 { 00:19:18.214 "code": -5, 00:19:18.214 "message": "Input/output error" 00:19:18.214 } 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3945571 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3945571 ']' 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3945571 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945571 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945571' 00:19:18.214 killing process with pid 3945571 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3945571 00:19:18.214 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.214 00:19:18.214 Latency(us) 00:19:18.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.214 =================================================================================================================== 00:19:18.214 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.214 [2024-07-15 18:34:03.632666] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:18.214 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3945571 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HxdiaOj16L 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HxdiaOj16L 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HxdiaOj16L 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HxdiaOj16L' 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3945806 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3945806 /var/tmp/bdevperf.sock 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3945806 ']' 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.473 18:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.473 [2024-07-15 18:34:03.850940] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:18.473 [2024-07-15 18:34:03.850988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945806 ] 00:19:18.473 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.473 [2024-07-15 18:34:03.911376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.473 [2024-07-15 18:34:03.977329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.HxdiaOj16L 00:19:19.408 [2024-07-15 18:34:04.806133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.408 [2024-07-15 18:34:04.806211] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:19.408 [2024-07-15 18:34:04.813407] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.408 [2024-07-15 18:34:04.813430] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.408 [2024-07-15 18:34:04.813454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:19.408 [2024-07-15 18:34:04.814314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75c570 (107): Transport endpoint is not connected 00:19:19.408 [2024-07-15 18:34:04.815307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75c570 (9): Bad file descriptor 00:19:19.408 [2024-07-15 18:34:04.816308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:19.408 [2024-07-15 18:34:04.816318] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:19.408 [2024-07-15 18:34:04.816326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
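
The valid_exec_arg / es=1 xtrace wrapped around each of these attach attempts is autotest_common.sh's NOT helper doing expected-failure checking: run the command, capture its exit status, and succeed only if the command failed. A simplified sketch of that logic, reconstructed from the trace (the real helper also special-cases signal deaths, which is what the (( es > 128 )) test above is for):

    # NOT <cmd...>: return success only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        # In this trace run_bdevperf returns 1, so es=1 and NOT passes.
        (( es != 0 ))
    }
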
00:19:19.408 request: 00:19:19.408 { 00:19:19.408 "name": "TLSTEST", 00:19:19.408 "trtype": "tcp", 00:19:19.408 "traddr": "10.0.0.2", 00:19:19.408 "adrfam": "ipv4", 00:19:19.408 "trsvcid": "4420", 00:19:19.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.408 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.408 "prchk_reftag": false, 00:19:19.408 "prchk_guard": false, 00:19:19.408 "hdgst": false, 00:19:19.408 "ddgst": false, 00:19:19.408 "psk": "/tmp/tmp.HxdiaOj16L", 00:19:19.408 "method": "bdev_nvme_attach_controller", 00:19:19.408 "req_id": 1 00:19:19.408 } 00:19:19.408 Got JSON-RPC error response 00:19:19.408 response: 00:19:19.408 { 00:19:19.408 "code": -5, 00:19:19.408 "message": "Input/output error" 00:19:19.408 } 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3945806 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3945806 ']' 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3945806 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945806 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945806' 00:19:19.408 killing process with pid 3945806 00:19:19.408 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3945806 00:19:19.408 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.408 00:19:19.408 Latency(us) 00:19:19.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.409 =================================================================================================================== 00:19:19.409 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:19.409 [2024-07-15 18:34:04.874707] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:19.409 18:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3945806 00:19:19.667 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HxdiaOj16L 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HxdiaOj16L 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HxdiaOj16L 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HxdiaOj16L' 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3946038 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3946038 /var/tmp/bdevperf.sock 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3946038 ']' 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.668 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.668 [2024-07-15 18:34:05.082350] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
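
The "Waiting for process to start up and listen on UNIX domain socket ..." lines above come from waitforlisten, which polls the freshly started bdevperf (-z puts it in wait-for-RPC mode) until its RPC socket answers. The shape of that loop, with the polling RPC and retry cadence as assumptions rather than the verbatim implementation:

    # Poll until $pid serves JSON-RPC on $rpc_addr, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries--)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process died early
            # rpc_get_methods is a lightweight RPC every SPDK app serves
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
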
00:19:19.668 [2024-07-15 18:34:05.082396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946038 ] 00:19:19.668 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.668 [2024-07-15 18:34:05.149750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.668 [2024-07-15 18:34:05.224950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.605 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.605 18:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:20.605 18:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HxdiaOj16L 00:19:20.605 [2024-07-15 18:34:06.035521] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.605 [2024-07-15 18:34:06.035598] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.605 [2024-07-15 18:34:06.046552] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.605 [2024-07-15 18:34:06.046574] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.605 [2024-07-15 18:34:06.046598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.605 [2024-07-15 18:34:06.046748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1267570 (107): Transport endpoint is not connected 00:19:20.605 [2024-07-15 18:34:06.047735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1267570 (9): Bad file descriptor 00:19:20.605 [2024-07-15 18:34:06.048737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:20.605 [2024-07-15 18:34:06.048748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.605 [2024-07-15 18:34:06.048758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:20.605 request: 00:19:20.605 { 00:19:20.605 "name": "TLSTEST", 00:19:20.605 "trtype": "tcp", 00:19:20.605 "traddr": "10.0.0.2", 00:19:20.605 "adrfam": "ipv4", 00:19:20.605 "trsvcid": "4420", 00:19:20.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.605 "prchk_reftag": false, 00:19:20.605 "prchk_guard": false, 00:19:20.605 "hdgst": false, 00:19:20.605 "ddgst": false, 00:19:20.605 "psk": "/tmp/tmp.HxdiaOj16L", 00:19:20.605 "method": "bdev_nvme_attach_controller", 00:19:20.605 "req_id": 1 00:19:20.605 } 00:19:20.605 Got JSON-RPC error response 00:19:20.605 response: 00:19:20.605 { 00:19:20.605 "code": -5, 00:19:20.605 "message": "Input/output error" 00:19:20.605 } 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3946038 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3946038 ']' 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3946038 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946038 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946038' 00:19:20.605 killing process with pid 3946038 00:19:20.605 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3946038 00:19:20.605 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.605 00:19:20.605 Latency(us) 00:19:20.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.605 =================================================================================================================== 00:19:20.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.605 [2024-07-15 18:34:06.121963] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.606 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3946038 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3946259 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3946259 /var/tmp/bdevperf.sock 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3946259 ']' 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.865 18:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.865 [2024-07-15 18:34:06.342890] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:20.865 [2024-07-15 18:34:06.342937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946259 ] 00:19:20.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.865 [2024-07-15 18:34:06.399527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.124 [2024-07-15 18:34:06.467016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.691 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.691 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:21.691 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.963 [2024-07-15 18:34:07.311232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:21.963 [2024-07-15 18:34:07.313124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118caf0 (9): Bad file descriptor 00:19:21.963 [2024-07-15 18:34:07.314123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:21.963 [2024-07-15 18:34:07.314133] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:21.963 [2024-07-15 18:34:07.314141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
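
This case (target/tls.sh@155 above) passes an empty PSK, so the initiator never attempts TLS at all; the listener was created with -k (TLS required) and the target simply drops the plaintext connection. That is why the trace shows only the errno 107 / Bad file descriptor sequence with no PSK-identity error on the target side, and why the request dump that follows carries no "psk" field. In tls.sh terms the whole case is:

    # Expect failure: no PSK at all against a TLS-required (-k) listener.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
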
00:19:21.963 request: 00:19:21.963 { 00:19:21.963 "name": "TLSTEST", 00:19:21.963 "trtype": "tcp", 00:19:21.963 "traddr": "10.0.0.2", 00:19:21.963 "adrfam": "ipv4", 00:19:21.963 "trsvcid": "4420", 00:19:21.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.963 "prchk_reftag": false, 00:19:21.963 "prchk_guard": false, 00:19:21.963 "hdgst": false, 00:19:21.963 "ddgst": false, 00:19:21.963 "method": "bdev_nvme_attach_controller", 00:19:21.963 "req_id": 1 00:19:21.963 } 00:19:21.963 Got JSON-RPC error response 00:19:21.963 response: 00:19:21.963 { 00:19:21.963 "code": -5, 00:19:21.963 "message": "Input/output error" 00:19:21.963 } 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3946259 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3946259 ']' 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3946259 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946259 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946259' 00:19:21.963 killing process with pid 3946259 00:19:21.963 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3946259 00:19:21.963 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.963 00:19:21.963 Latency(us) 00:19:21.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.964 =================================================================================================================== 00:19:21.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.964 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3946259 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3941370 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3941370 ']' 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3941370 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3941370 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3941370' 00:19:22.234 
killing process with pid 3941370 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3941370 00:19:22.234 [2024-07-15 18:34:07.602833] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:22.234 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3941370 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.kayOMvzTgg 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.kayOMvzTgg 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3946534 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3946534 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3946534 ']' 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.493 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.494 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.494 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.494 18:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.494 [2024-07-15 18:34:07.899920] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
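
The format_interchange_psk step a few lines up is where the configured hex key becomes the on-disk TLS PSK interchange string: nvmf/common.sh pipes it through an inline python snippet, and the result (NVMeTLSkey-1:02:...) is written to /tmp/tmp.kayOMvzTgg and chmod'ed 0600. A close approximation of that helper, assuming the base64 trailer is a little-endian CRC32 of the key characters, which is what the wWXNJw== tail decodes as; note it is the ASCII hex string itself, not its decoded bytes, that gets encoded:

    # format_key PREFIX KEY DIGEST -> PREFIX:<digest as 2 hex digits>:base64(KEY || crc32(KEY)):
    format_key() {
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()), end="")' "$@"
    }
    # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    #   -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
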
00:19:22.494 [2024-07-15 18:34:07.899963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.494 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.494 [2024-07-15 18:34:07.964342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.494 [2024-07-15 18:34:08.034821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.494 [2024-07-15 18:34:08.034860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.494 [2024-07-15 18:34:08.034867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.494 [2024-07-15 18:34:08.034873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.494 [2024-07-15 18:34:08.034878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.494 [2024-07-15 18:34:08.034898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kayOMvzTgg 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:23.430 [2024-07-15 18:34:08.889451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.430 18:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:23.688 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.946 [2024-07-15 18:34:09.258386] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.946 [2024-07-15 18:34:09.258566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.946 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.946 malloc0 00:19:23.946 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:24.205 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kayOMvzTgg 00:19:24.463 [2024-07-15 18:34:09.787913] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kayOMvzTgg 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kayOMvzTgg' 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3946797 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3946797 /var/tmp/bdevperf.sock 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3946797 ']' 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.463 18:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.463 [2024-07-15 18:34:09.848625] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
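
For reference, the target-side TLS bring-up that setup_nvmf_tgt just traced (target/tls.sh@49-58) boils down to six RPCs. These are exactly the commands in the xtrace, with the long rpc.py path shortened:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k makes the listener require TLS (logged above as experimental)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # registers the PSK for this host/subsystem pair; this PSK-path form is
    # the one the log flags as deprecated for removal in v24.09
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg
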
00:19:24.463 [2024-07-15 18:34:09.848670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946797 ] 00:19:24.463 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.463 [2024-07-15 18:34:09.912747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.463 [2024-07-15 18:34:09.983655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.399 18:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.399 18:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.399 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg 00:19:25.399 [2024-07-15 18:34:10.812677] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.399 [2024-07-15 18:34:10.812759] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.399 TLSTESTn1 00:19:25.399 18:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:25.658 Running I/O for 10 seconds... 00:19:35.634 00:19:35.634 Latency(us) 00:19:35.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.634 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:35.634 Verification LBA range: start 0x0 length 0x2000 00:19:35.634 TLSTESTn1 : 10.02 5286.16 20.65 0.00 0.00 24178.12 6272.73 33204.91 00:19:35.634 =================================================================================================================== 00:19:35.634 Total : 5286.16 20.65 0.00 0.00 24178.12 6272.73 33204.91 00:19:35.634 0 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3946797 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3946797 ']' 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3946797 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946797 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:35.634 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946797' 00:19:35.635 killing process with pid 3946797 00:19:35.635 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3946797 00:19:35.635 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.635 00:19:35.635 Latency(us) 00:19:35.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:35.635 =================================================================================================================== 00:19:35.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.635 [2024-07-15 18:34:21.096835] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.635 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3946797 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.kayOMvzTgg 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kayOMvzTgg 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kayOMvzTgg 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kayOMvzTgg 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kayOMvzTgg' 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3948634 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3948634 /var/tmp/bdevperf.sock 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3948634 ']' 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.893 18:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.893 [2024-07-15 18:34:21.327382] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
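
The next case flips the key file to mode 0666 (target/tls.sh@170 above). As the log below shows, bdev_nvme refuses to load a key readable beyond its owner ("Incorrect permissions for PSK file") and the RPC fails with -1 Operation not permitted instead of -5: the key never reaches the wire. A pre-flight check mirroring that policy (the exact mode bits SPDK tolerates are an assumption; the suite itself only ever uses 0600):

    key=/tmp/tmp.kayOMvzTgg
    mode=$(stat -c %a "$key")
    if [[ $mode != 600 ]]; then
        echo "refusing $key: mode $mode, PSK files should be 0600" >&2
    fi
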
00:19:35.893 [2024-07-15 18:34:21.327428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948634 ] 00:19:35.893 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.893 [2024-07-15 18:34:21.392868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.152 [2024-07-15 18:34:21.460377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.719 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.719 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.719 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg 00:19:36.977 [2024-07-15 18:34:22.278586] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.977 [2024-07-15 18:34:22.278634] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:36.977 [2024-07-15 18:34:22.278641] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.kayOMvzTgg 00:19:36.977 request: 00:19:36.977 { 00:19:36.977 "name": "TLSTEST", 00:19:36.977 "trtype": "tcp", 00:19:36.977 "traddr": "10.0.0.2", 00:19:36.977 "adrfam": "ipv4", 00:19:36.977 "trsvcid": "4420", 00:19:36.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.977 "prchk_reftag": false, 00:19:36.977 "prchk_guard": false, 00:19:36.977 "hdgst": false, 00:19:36.977 "ddgst": false, 00:19:36.977 "psk": "/tmp/tmp.kayOMvzTgg", 00:19:36.977 "method": "bdev_nvme_attach_controller", 00:19:36.977 "req_id": 1 00:19:36.977 } 00:19:36.977 Got JSON-RPC error response 00:19:36.977 response: 00:19:36.977 { 00:19:36.977 "code": -1, 00:19:36.977 "message": "Operation not permitted" 00:19:36.977 } 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3948634 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3948634 ']' 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3948634 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948634 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948634' 00:19:36.977 killing process with pid 3948634 00:19:36.977 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3948634 00:19:36.977 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.977 00:19:36.977 Latency(us) 00:19:36.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.978 
=================================================================================================================== 00:19:36.978 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3948634 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3946534 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3946534 ']' 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3946534 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.978 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946534 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946534' 00:19:37.237 killing process with pid 3946534 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3946534 00:19:37.237 [2024-07-15 18:34:22.566791] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3946534 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3948880 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3948880 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3948880 ']' 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.237 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.497 [2024-07-15 18:34:22.807414] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:37.497 [2024-07-15 18:34:22.807457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.497 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.497 [2024-07-15 18:34:22.876968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.497 [2024-07-15 18:34:22.951716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.497 [2024-07-15 18:34:22.951754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.497 [2024-07-15 18:34:22.951765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.497 [2024-07-15 18:34:22.951771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.497 [2024-07-15 18:34:22.951776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.497 [2024-07-15 18:34:22.951793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.064 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.064 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.064 18:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.064 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.064 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kayOMvzTgg 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.324 [2024-07-15 18:34:23.801041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.324 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:38.582 
18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:38.840 [2024-07-15 18:34:24.149926] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.840 [2024-07-15 18:34:24.150093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.840 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.840 malloc0 00:19:38.840 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:39.099 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg 00:19:39.359 [2024-07-15 18:34:24.687188] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:39.359 [2024-07-15 18:34:24.687211] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:39.359 [2024-07-15 18:34:24.687232] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:39.359 request: 00:19:39.359 { 00:19:39.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.359 "host": "nqn.2016-06.io.spdk:host1", 00:19:39.359 "psk": "/tmp/tmp.kayOMvzTgg", 00:19:39.359 "method": "nvmf_subsystem_add_host", 00:19:39.359 "req_id": 1 00:19:39.359 } 00:19:39.359 Got JSON-RPC error response 00:19:39.359 response: 00:19:39.359 { 00:19:39.359 "code": -32603, 00:19:39.359 "message": "Internal error" 00:19:39.359 } 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3948880 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3948880 ']' 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3948880 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948880 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948880' 00:19:39.359 killing process with pid 3948880 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3948880 00:19:39.359 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3948880 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.kayOMvzTgg 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:39.618 
18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3949366 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3949366 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3949366 ']' 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.618 18:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.618 [2024-07-15 18:34:25.013349] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:39.618 [2024-07-15 18:34:25.013395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.618 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.618 [2024-07-15 18:34:25.077746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.618 [2024-07-15 18:34:25.154652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.618 [2024-07-15 18:34:25.154687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.618 [2024-07-15 18:34:25.154693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.618 [2024-07-15 18:34:25.154699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.618 [2024-07-15 18:34:25.154704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:39.618 [2024-07-15 18:34:25.154723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kayOMvzTgg 00:19:40.552 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.552 [2024-07-15 18:34:26.001495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.552 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:40.811 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:40.811 [2024-07-15 18:34:26.362426] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.811 [2024-07-15 18:34:26.362610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.069 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:41.069 malloc0 00:19:41.069 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:41.327 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg 00:19:41.327 [2024-07-15 18:34:26.867621] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3949628 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3949628 /var/tmp/bdevperf.sock 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3949628 ']' 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.585 18:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.585 [2024-07-15 18:34:26.937888] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:41.585 [2024-07-15 18:34:26.937931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949628 ] 00:19:41.585 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.585 [2024-07-15 18:34:27.006986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.585 [2024-07-15 18:34:27.078934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.520 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.520 18:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.520 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kayOMvzTgg 00:19:42.520 [2024-07-15 18:34:27.897283] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.520 [2024-07-15 18:34:27.897368] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.520 TLSTESTn1 00:19:42.520 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:42.778 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:42.778 "subsystems": [ 00:19:42.778 { 00:19:42.778 "subsystem": "keyring", 00:19:42.778 "config": [] 00:19:42.778 }, 00:19:42.778 { 00:19:42.778 "subsystem": "iobuf", 00:19:42.778 "config": [ 00:19:42.778 { 00:19:42.778 "method": "iobuf_set_options", 00:19:42.779 "params": { 00:19:42.779 "small_pool_count": 8192, 00:19:42.779 "large_pool_count": 1024, 00:19:42.779 "small_bufsize": 8192, 00:19:42.779 "large_bufsize": 135168 00:19:42.779 } 00:19:42.779 } 00:19:42.779 ] 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "subsystem": "sock", 00:19:42.779 "config": [ 00:19:42.779 { 00:19:42.779 "method": "sock_set_default_impl", 00:19:42.779 "params": { 00:19:42.779 "impl_name": "posix" 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "sock_impl_set_options", 00:19:42.779 "params": { 00:19:42.779 "impl_name": "ssl", 00:19:42.779 "recv_buf_size": 4096, 00:19:42.779 "send_buf_size": 4096, 00:19:42.779 "enable_recv_pipe": true, 00:19:42.779 "enable_quickack": false, 00:19:42.779 "enable_placement_id": 0, 00:19:42.779 "enable_zerocopy_send_server": true, 00:19:42.779 "enable_zerocopy_send_client": false, 00:19:42.779 "zerocopy_threshold": 0, 00:19:42.779 "tls_version": 0, 00:19:42.779 "enable_ktls": false 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "sock_impl_set_options", 00:19:42.779 "params": { 00:19:42.779 "impl_name": "posix", 00:19:42.779 "recv_buf_size": 2097152, 00:19:42.779 
"send_buf_size": 2097152, 00:19:42.779 "enable_recv_pipe": true, 00:19:42.779 "enable_quickack": false, 00:19:42.779 "enable_placement_id": 0, 00:19:42.779 "enable_zerocopy_send_server": true, 00:19:42.779 "enable_zerocopy_send_client": false, 00:19:42.779 "zerocopy_threshold": 0, 00:19:42.779 "tls_version": 0, 00:19:42.779 "enable_ktls": false 00:19:42.779 } 00:19:42.779 } 00:19:42.779 ] 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "subsystem": "vmd", 00:19:42.779 "config": [] 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "subsystem": "accel", 00:19:42.779 "config": [ 00:19:42.779 { 00:19:42.779 "method": "accel_set_options", 00:19:42.779 "params": { 00:19:42.779 "small_cache_size": 128, 00:19:42.779 "large_cache_size": 16, 00:19:42.779 "task_count": 2048, 00:19:42.779 "sequence_count": 2048, 00:19:42.779 "buf_count": 2048 00:19:42.779 } 00:19:42.779 } 00:19:42.779 ] 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "subsystem": "bdev", 00:19:42.779 "config": [ 00:19:42.779 { 00:19:42.779 "method": "bdev_set_options", 00:19:42.779 "params": { 00:19:42.779 "bdev_io_pool_size": 65535, 00:19:42.779 "bdev_io_cache_size": 256, 00:19:42.779 "bdev_auto_examine": true, 00:19:42.779 "iobuf_small_cache_size": 128, 00:19:42.779 "iobuf_large_cache_size": 16 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "bdev_raid_set_options", 00:19:42.779 "params": { 00:19:42.779 "process_window_size_kb": 1024 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "bdev_iscsi_set_options", 00:19:42.779 "params": { 00:19:42.779 "timeout_sec": 30 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "bdev_nvme_set_options", 00:19:42.779 "params": { 00:19:42.779 "action_on_timeout": "none", 00:19:42.779 "timeout_us": 0, 00:19:42.779 "timeout_admin_us": 0, 00:19:42.779 "keep_alive_timeout_ms": 10000, 00:19:42.779 "arbitration_burst": 0, 00:19:42.779 "low_priority_weight": 0, 00:19:42.779 "medium_priority_weight": 0, 00:19:42.779 "high_priority_weight": 0, 00:19:42.779 "nvme_adminq_poll_period_us": 10000, 00:19:42.779 "nvme_ioq_poll_period_us": 0, 00:19:42.779 "io_queue_requests": 0, 00:19:42.779 "delay_cmd_submit": true, 00:19:42.779 "transport_retry_count": 4, 00:19:42.779 "bdev_retry_count": 3, 00:19:42.779 "transport_ack_timeout": 0, 00:19:42.779 "ctrlr_loss_timeout_sec": 0, 00:19:42.779 "reconnect_delay_sec": 0, 00:19:42.779 "fast_io_fail_timeout_sec": 0, 00:19:42.779 "disable_auto_failback": false, 00:19:42.779 "generate_uuids": false, 00:19:42.779 "transport_tos": 0, 00:19:42.779 "nvme_error_stat": false, 00:19:42.779 "rdma_srq_size": 0, 00:19:42.779 "io_path_stat": false, 00:19:42.779 "allow_accel_sequence": false, 00:19:42.779 "rdma_max_cq_size": 0, 00:19:42.779 "rdma_cm_event_timeout_ms": 0, 00:19:42.779 "dhchap_digests": [ 00:19:42.779 "sha256", 00:19:42.779 "sha384", 00:19:42.779 "sha512" 00:19:42.779 ], 00:19:42.779 "dhchap_dhgroups": [ 00:19:42.779 "null", 00:19:42.779 "ffdhe2048", 00:19:42.779 "ffdhe3072", 00:19:42.779 "ffdhe4096", 00:19:42.779 "ffdhe6144", 00:19:42.779 "ffdhe8192" 00:19:42.779 ] 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "bdev_nvme_set_hotplug", 00:19:42.779 "params": { 00:19:42.779 "period_us": 100000, 00:19:42.779 "enable": false 00:19:42.779 } 00:19:42.779 }, 00:19:42.779 { 00:19:42.779 "method": "bdev_malloc_create", 00:19:42.779 "params": { 00:19:42.779 "name": "malloc0", 00:19:42.779 "num_blocks": 8192, 00:19:42.779 "block_size": 4096, 00:19:42.779 "physical_block_size": 4096, 00:19:42.780 "uuid": 
"a71869d8-55d5-4b8a-b85e-220a85f57bad", 00:19:42.780 "optimal_io_boundary": 0 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "bdev_wait_for_examine" 00:19:42.780 } 00:19:42.780 ] 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "subsystem": "nbd", 00:19:42.780 "config": [] 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "subsystem": "scheduler", 00:19:42.780 "config": [ 00:19:42.780 { 00:19:42.780 "method": "framework_set_scheduler", 00:19:42.780 "params": { 00:19:42.780 "name": "static" 00:19:42.780 } 00:19:42.780 } 00:19:42.780 ] 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "subsystem": "nvmf", 00:19:42.780 "config": [ 00:19:42.780 { 00:19:42.780 "method": "nvmf_set_config", 00:19:42.780 "params": { 00:19:42.780 "discovery_filter": "match_any", 00:19:42.780 "admin_cmd_passthru": { 00:19:42.780 "identify_ctrlr": false 00:19:42.780 } 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_set_max_subsystems", 00:19:42.780 "params": { 00:19:42.780 "max_subsystems": 1024 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_set_crdt", 00:19:42.780 "params": { 00:19:42.780 "crdt1": 0, 00:19:42.780 "crdt2": 0, 00:19:42.780 "crdt3": 0 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_create_transport", 00:19:42.780 "params": { 00:19:42.780 "trtype": "TCP", 00:19:42.780 "max_queue_depth": 128, 00:19:42.780 "max_io_qpairs_per_ctrlr": 127, 00:19:42.780 "in_capsule_data_size": 4096, 00:19:42.780 "max_io_size": 131072, 00:19:42.780 "io_unit_size": 131072, 00:19:42.780 "max_aq_depth": 128, 00:19:42.780 "num_shared_buffers": 511, 00:19:42.780 "buf_cache_size": 4294967295, 00:19:42.780 "dif_insert_or_strip": false, 00:19:42.780 "zcopy": false, 00:19:42.780 "c2h_success": false, 00:19:42.780 "sock_priority": 0, 00:19:42.780 "abort_timeout_sec": 1, 00:19:42.780 "ack_timeout": 0, 00:19:42.780 "data_wr_pool_size": 0 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_create_subsystem", 00:19:42.780 "params": { 00:19:42.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.780 "allow_any_host": false, 00:19:42.780 "serial_number": "SPDK00000000000001", 00:19:42.780 "model_number": "SPDK bdev Controller", 00:19:42.780 "max_namespaces": 10, 00:19:42.780 "min_cntlid": 1, 00:19:42.780 "max_cntlid": 65519, 00:19:42.780 "ana_reporting": false 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_subsystem_add_host", 00:19:42.780 "params": { 00:19:42.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.780 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.780 "psk": "/tmp/tmp.kayOMvzTgg" 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_subsystem_add_ns", 00:19:42.780 "params": { 00:19:42.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.780 "namespace": { 00:19:42.780 "nsid": 1, 00:19:42.780 "bdev_name": "malloc0", 00:19:42.780 "nguid": "A71869D855D54B8AB85E220A85F57BAD", 00:19:42.780 "uuid": "a71869d8-55d5-4b8a-b85e-220a85f57bad", 00:19:42.780 "no_auto_visible": false 00:19:42.780 } 00:19:42.780 } 00:19:42.780 }, 00:19:42.780 { 00:19:42.780 "method": "nvmf_subsystem_add_listener", 00:19:42.780 "params": { 00:19:42.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.780 "listen_address": { 00:19:42.780 "trtype": "TCP", 00:19:42.780 "adrfam": "IPv4", 00:19:42.780 "traddr": "10.0.0.2", 00:19:42.780 "trsvcid": "4420" 00:19:42.780 }, 00:19:42.780 "secure_channel": true 00:19:42.780 } 00:19:42.780 } 00:19:42.780 ] 00:19:42.780 } 00:19:42.780 ] 00:19:42.780 }' 00:19:42.780 18:34:28 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:43.039 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:43.039 "subsystems": [ 00:19:43.039 { 00:19:43.039 "subsystem": "keyring", 00:19:43.039 "config": [] 00:19:43.039 }, 00:19:43.039 { 00:19:43.039 "subsystem": "iobuf", 00:19:43.039 "config": [ 00:19:43.039 { 00:19:43.039 "method": "iobuf_set_options", 00:19:43.039 "params": { 00:19:43.039 "small_pool_count": 8192, 00:19:43.039 "large_pool_count": 1024, 00:19:43.039 "small_bufsize": 8192, 00:19:43.039 "large_bufsize": 135168 00:19:43.039 } 00:19:43.039 } 00:19:43.039 ] 00:19:43.039 }, 00:19:43.039 { 00:19:43.039 "subsystem": "sock", 00:19:43.039 "config": [ 00:19:43.039 { 00:19:43.039 "method": "sock_set_default_impl", 00:19:43.039 "params": { 00:19:43.039 "impl_name": "posix" 00:19:43.039 } 00:19:43.039 }, 00:19:43.039 { 00:19:43.039 "method": "sock_impl_set_options", 00:19:43.039 "params": { 00:19:43.039 "impl_name": "ssl", 00:19:43.039 "recv_buf_size": 4096, 00:19:43.039 "send_buf_size": 4096, 00:19:43.039 "enable_recv_pipe": true, 00:19:43.039 "enable_quickack": false, 00:19:43.039 "enable_placement_id": 0, 00:19:43.039 "enable_zerocopy_send_server": true, 00:19:43.039 "enable_zerocopy_send_client": false, 00:19:43.039 "zerocopy_threshold": 0, 00:19:43.039 "tls_version": 0, 00:19:43.039 "enable_ktls": false 00:19:43.039 } 00:19:43.039 }, 00:19:43.039 { 00:19:43.039 "method": "sock_impl_set_options", 00:19:43.039 "params": { 00:19:43.039 "impl_name": "posix", 00:19:43.039 "recv_buf_size": 2097152, 00:19:43.039 "send_buf_size": 2097152, 00:19:43.039 "enable_recv_pipe": true, 00:19:43.039 "enable_quickack": false, 00:19:43.039 "enable_placement_id": 0, 00:19:43.039 "enable_zerocopy_send_server": true, 00:19:43.039 "enable_zerocopy_send_client": false, 00:19:43.040 "zerocopy_threshold": 0, 00:19:43.040 "tls_version": 0, 00:19:43.040 "enable_ktls": false 00:19:43.040 } 00:19:43.040 } 00:19:43.040 ] 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "subsystem": "vmd", 00:19:43.040 "config": [] 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "subsystem": "accel", 00:19:43.040 "config": [ 00:19:43.040 { 00:19:43.040 "method": "accel_set_options", 00:19:43.040 "params": { 00:19:43.040 "small_cache_size": 128, 00:19:43.040 "large_cache_size": 16, 00:19:43.040 "task_count": 2048, 00:19:43.040 "sequence_count": 2048, 00:19:43.040 "buf_count": 2048 00:19:43.040 } 00:19:43.040 } 00:19:43.040 ] 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "subsystem": "bdev", 00:19:43.040 "config": [ 00:19:43.040 { 00:19:43.040 "method": "bdev_set_options", 00:19:43.040 "params": { 00:19:43.040 "bdev_io_pool_size": 65535, 00:19:43.040 "bdev_io_cache_size": 256, 00:19:43.040 "bdev_auto_examine": true, 00:19:43.040 "iobuf_small_cache_size": 128, 00:19:43.040 "iobuf_large_cache_size": 16 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_raid_set_options", 00:19:43.040 "params": { 00:19:43.040 "process_window_size_kb": 1024 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_iscsi_set_options", 00:19:43.040 "params": { 00:19:43.040 "timeout_sec": 30 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_nvme_set_options", 00:19:43.040 "params": { 00:19:43.040 "action_on_timeout": "none", 00:19:43.040 "timeout_us": 0, 00:19:43.040 "timeout_admin_us": 0, 00:19:43.040 "keep_alive_timeout_ms": 10000, 00:19:43.040 "arbitration_burst": 0, 
00:19:43.040 "low_priority_weight": 0, 00:19:43.040 "medium_priority_weight": 0, 00:19:43.040 "high_priority_weight": 0, 00:19:43.040 "nvme_adminq_poll_period_us": 10000, 00:19:43.040 "nvme_ioq_poll_period_us": 0, 00:19:43.040 "io_queue_requests": 512, 00:19:43.040 "delay_cmd_submit": true, 00:19:43.040 "transport_retry_count": 4, 00:19:43.040 "bdev_retry_count": 3, 00:19:43.040 "transport_ack_timeout": 0, 00:19:43.040 "ctrlr_loss_timeout_sec": 0, 00:19:43.040 "reconnect_delay_sec": 0, 00:19:43.040 "fast_io_fail_timeout_sec": 0, 00:19:43.040 "disable_auto_failback": false, 00:19:43.040 "generate_uuids": false, 00:19:43.040 "transport_tos": 0, 00:19:43.040 "nvme_error_stat": false, 00:19:43.040 "rdma_srq_size": 0, 00:19:43.040 "io_path_stat": false, 00:19:43.040 "allow_accel_sequence": false, 00:19:43.040 "rdma_max_cq_size": 0, 00:19:43.040 "rdma_cm_event_timeout_ms": 0, 00:19:43.040 "dhchap_digests": [ 00:19:43.040 "sha256", 00:19:43.040 "sha384", 00:19:43.040 "sha512" 00:19:43.040 ], 00:19:43.040 "dhchap_dhgroups": [ 00:19:43.040 "null", 00:19:43.040 "ffdhe2048", 00:19:43.040 "ffdhe3072", 00:19:43.040 "ffdhe4096", 00:19:43.040 "ffdhe6144", 00:19:43.040 "ffdhe8192" 00:19:43.040 ] 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_nvme_attach_controller", 00:19:43.040 "params": { 00:19:43.040 "name": "TLSTEST", 00:19:43.040 "trtype": "TCP", 00:19:43.040 "adrfam": "IPv4", 00:19:43.040 "traddr": "10.0.0.2", 00:19:43.040 "trsvcid": "4420", 00:19:43.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.040 "prchk_reftag": false, 00:19:43.040 "prchk_guard": false, 00:19:43.040 "ctrlr_loss_timeout_sec": 0, 00:19:43.040 "reconnect_delay_sec": 0, 00:19:43.040 "fast_io_fail_timeout_sec": 0, 00:19:43.040 "psk": "/tmp/tmp.kayOMvzTgg", 00:19:43.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.040 "hdgst": false, 00:19:43.040 "ddgst": false 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_nvme_set_hotplug", 00:19:43.040 "params": { 00:19:43.040 "period_us": 100000, 00:19:43.040 "enable": false 00:19:43.040 } 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "method": "bdev_wait_for_examine" 00:19:43.040 } 00:19:43.040 ] 00:19:43.040 }, 00:19:43.040 { 00:19:43.040 "subsystem": "nbd", 00:19:43.040 "config": [] 00:19:43.040 } 00:19:43.040 ] 00:19:43.040 }' 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3949628 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3949628 ']' 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3949628 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3949628 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3949628' 00:19:43.040 killing process with pid 3949628 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3949628 00:19:43.040 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.040 00:19:43.040 Latency(us) 00:19:43.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:43.040 =================================================================================================================== 00:19:43.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.040 [2024-07-15 18:34:28.536606] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:43.040 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3949628 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3949366 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3949366 ']' 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3949366 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3949366 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3949366' 00:19:43.298 killing process with pid 3949366 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3949366 00:19:43.298 [2024-07-15 18:34:28.761590] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:43.298 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3949366 00:19:43.557 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:43.557 18:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.557 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:43.557 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:43.557 "subsystems": [ 00:19:43.557 { 00:19:43.557 "subsystem": "keyring", 00:19:43.557 "config": [] 00:19:43.557 }, 00:19:43.557 { 00:19:43.557 "subsystem": "iobuf", 00:19:43.557 "config": [ 00:19:43.557 { 00:19:43.557 "method": "iobuf_set_options", 00:19:43.557 "params": { 00:19:43.557 "small_pool_count": 8192, 00:19:43.557 "large_pool_count": 1024, 00:19:43.557 "small_bufsize": 8192, 00:19:43.557 "large_bufsize": 135168 00:19:43.557 } 00:19:43.557 } 00:19:43.557 ] 00:19:43.557 }, 00:19:43.557 { 00:19:43.557 "subsystem": "sock", 00:19:43.557 "config": [ 00:19:43.557 { 00:19:43.557 "method": "sock_set_default_impl", 00:19:43.557 "params": { 00:19:43.557 "impl_name": "posix" 00:19:43.557 } 00:19:43.557 }, 00:19:43.557 { 00:19:43.557 "method": "sock_impl_set_options", 00:19:43.557 "params": { 00:19:43.557 "impl_name": "ssl", 00:19:43.557 "recv_buf_size": 4096, 00:19:43.557 "send_buf_size": 4096, 00:19:43.557 "enable_recv_pipe": true, 00:19:43.557 "enable_quickack": false, 00:19:43.557 "enable_placement_id": 0, 00:19:43.557 "enable_zerocopy_send_server": true, 00:19:43.557 "enable_zerocopy_send_client": false, 00:19:43.557 "zerocopy_threshold": 0, 00:19:43.557 "tls_version": 0, 00:19:43.557 "enable_ktls": false 00:19:43.557 } 00:19:43.557 }, 00:19:43.557 { 00:19:43.557 "method": "sock_impl_set_options", 00:19:43.557 "params": { 00:19:43.557 "impl_name": "posix", 00:19:43.557 
"recv_buf_size": 2097152, 00:19:43.558 "send_buf_size": 2097152, 00:19:43.558 "enable_recv_pipe": true, 00:19:43.558 "enable_quickack": false, 00:19:43.558 "enable_placement_id": 0, 00:19:43.558 "enable_zerocopy_send_server": true, 00:19:43.558 "enable_zerocopy_send_client": false, 00:19:43.558 "zerocopy_threshold": 0, 00:19:43.558 "tls_version": 0, 00:19:43.558 "enable_ktls": false 00:19:43.558 } 00:19:43.558 } 00:19:43.558 ] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "vmd", 00:19:43.558 "config": [] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "accel", 00:19:43.558 "config": [ 00:19:43.558 { 00:19:43.558 "method": "accel_set_options", 00:19:43.558 "params": { 00:19:43.558 "small_cache_size": 128, 00:19:43.558 "large_cache_size": 16, 00:19:43.558 "task_count": 2048, 00:19:43.558 "sequence_count": 2048, 00:19:43.558 "buf_count": 2048 00:19:43.558 } 00:19:43.558 } 00:19:43.558 ] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "bdev", 00:19:43.558 "config": [ 00:19:43.558 { 00:19:43.558 "method": "bdev_set_options", 00:19:43.558 "params": { 00:19:43.558 "bdev_io_pool_size": 65535, 00:19:43.558 "bdev_io_cache_size": 256, 00:19:43.558 "bdev_auto_examine": true, 00:19:43.558 "iobuf_small_cache_size": 128, 00:19:43.558 "iobuf_large_cache_size": 16 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_raid_set_options", 00:19:43.558 "params": { 00:19:43.558 "process_window_size_kb": 1024 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_iscsi_set_options", 00:19:43.558 "params": { 00:19:43.558 "timeout_sec": 30 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_nvme_set_options", 00:19:43.558 "params": { 00:19:43.558 "action_on_timeout": "none", 00:19:43.558 "timeout_us": 0, 00:19:43.558 "timeout_admin_us": 0, 00:19:43.558 "keep_alive_timeout_ms": 10000, 00:19:43.558 "arbitration_burst": 0, 00:19:43.558 "low_priority_weight": 0, 00:19:43.558 "medium_priority_weight": 0, 00:19:43.558 "high_priority_weight": 0, 00:19:43.558 "nvme_adminq_poll_period_us": 10000, 00:19:43.558 "nvme_ioq_poll_period_us": 0, 00:19:43.558 "io_queue_requests": 0, 00:19:43.558 "delay_cmd_submit": true, 00:19:43.558 "transport_retry_count": 4, 00:19:43.558 "bdev_retry_count": 3, 00:19:43.558 "transport_ack_timeout": 0, 00:19:43.558 "ctrlr_loss_timeout_sec": 0, 00:19:43.558 "reconnect_delay_sec": 0, 00:19:43.558 "fast_io_fail_timeout_sec": 0, 00:19:43.558 "disable_auto_failback": false, 00:19:43.558 "generate_uuids": false, 00:19:43.558 "transport_tos": 0, 00:19:43.558 "nvme_error_stat": false, 00:19:43.558 "rdma_srq_size": 0, 00:19:43.558 "io_path_stat": false, 00:19:43.558 "allow_accel_sequence": false, 00:19:43.558 "rdma_max_cq_size": 0, 00:19:43.558 "rdma_cm_event_timeout_ms": 0, 00:19:43.558 "dhchap_digests": [ 00:19:43.558 "sha256", 00:19:43.558 "sha384", 00:19:43.558 "sha512" 00:19:43.558 ], 00:19:43.558 "dhchap_dhgroups": [ 00:19:43.558 "null", 00:19:43.558 "ffdhe2048", 00:19:43.558 "ffdhe3072", 00:19:43.558 "ffdhe4096", 00:19:43.558 "ffdhe6144", 00:19:43.558 "ffdhe8192" 00:19:43.558 ] 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_nvme_set_hotplug", 00:19:43.558 "params": { 00:19:43.558 "period_us": 100000, 00:19:43.558 "enable": false 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_malloc_create", 00:19:43.558 "params": { 00:19:43.558 "name": "malloc0", 00:19:43.558 "num_blocks": 8192, 00:19:43.558 "block_size": 4096, 00:19:43.558 "physical_block_size": 4096, 
00:19:43.558 "uuid": "a71869d8-55d5-4b8a-b85e-220a85f57bad", 00:19:43.558 "optimal_io_boundary": 0 00:19:43.558 } 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "method": "bdev_wait_for_examine" 00:19:43.558 } 00:19:43.558 ] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "nbd", 00:19:43.558 "config": [] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "scheduler", 00:19:43.558 "config": [ 00:19:43.558 { 00:19:43.558 "method": "framework_set_scheduler", 00:19:43.558 "params": { 00:19:43.558 "name": "static" 00:19:43.558 } 00:19:43.558 } 00:19:43.558 ] 00:19:43.558 }, 00:19:43.558 { 00:19:43.558 "subsystem": "nvmf", 00:19:43.558 "config": [ 00:19:43.558 { 00:19:43.558 "method": "nvmf_set_config", 00:19:43.558 "params": { 00:19:43.558 "discovery_filter": "match_any", 00:19:43.558 "admin_cmd_passthru": { 00:19:43.559 "identify_ctrlr": false 00:19:43.559 } 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_set_max_subsystems", 00:19:43.559 "params": { 00:19:43.559 "max_subsystems": 1024 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_set_crdt", 00:19:43.559 "params": { 00:19:43.559 "crdt1": 0, 00:19:43.559 "crdt2": 0, 00:19:43.559 "crdt3": 0 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_create_transport", 00:19:43.559 "params": { 00:19:43.559 "trtype": "TCP", 00:19:43.559 "max_queue_depth": 128, 00:19:43.559 "max_io_qpairs_per_ctrlr": 127, 00:19:43.559 "in_capsule_data_size": 4096, 00:19:43.559 "max_io_size": 131072, 00:19:43.559 "io_unit_size": 131072, 00:19:43.559 "max_aq_depth": 128, 00:19:43.559 "num_shared_buffers": 511, 00:19:43.559 "buf_cache_size": 4294967295, 00:19:43.559 "dif_insert_or_strip": false, 00:19:43.559 "zcopy": false, 00:19:43.559 "c2h_success": false, 00:19:43.559 "sock_priority": 0, 00:19:43.559 "abort_timeout_sec": 1, 00:19:43.559 "ack_timeout": 0, 00:19:43.559 "data_wr_pool_size": 0 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_create_subsystem", 00:19:43.559 "params": { 00:19:43.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.559 "allow_any_host": false, 00:19:43.559 "serial_number": "SPDK00000000000001", 00:19:43.559 "model_number": "SPDK bdev Controller", 00:19:43.559 "max_namespaces": 10, 00:19:43.559 "min_cntlid": 1, 00:19:43.559 "max_cntlid": 65519, 00:19:43.559 "ana_reporting": false 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_subsystem_add_host", 00:19:43.559 "params": { 00:19:43.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.559 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.559 "psk": "/tmp/tmp.kayOMvzTgg" 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_subsystem_add_ns", 00:19:43.559 "params": { 00:19:43.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.559 "namespace": { 00:19:43.559 "nsid": 1, 00:19:43.559 "bdev_name": "malloc0", 00:19:43.559 "nguid": "A71869D855D54B8AB85E220A85F57BAD", 00:19:43.559 "uuid": "a71869d8-55d5-4b8a-b85e-220a85f57bad", 00:19:43.559 "no_auto_visible": false 00:19:43.559 } 00:19:43.559 } 00:19:43.559 }, 00:19:43.559 { 00:19:43.559 "method": "nvmf_subsystem_add_listener", 00:19:43.559 "params": { 00:19:43.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.559 "listen_address": { 00:19:43.559 "trtype": "TCP", 00:19:43.559 "adrfam": "IPv4", 00:19:43.559 "traddr": "10.0.0.2", 00:19:43.559 "trsvcid": "4420" 00:19:43.559 }, 00:19:43.559 "secure_channel": true 00:19:43.559 } 00:19:43.559 } 00:19:43.559 ] 00:19:43.559 } 00:19:43.559 ] 00:19:43.559 }' 
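The two JSON blobs captured above are save_config dumps taken first from the target RPC socket (tgtconf) and then from the bdevperf socket (bdevperfconf); tls.sh stores them so both processes can be killed and restarted non-interactively from configuration alone. A sketch of the round-trip, reusing the $rpc shorthand; the /dev/fd/62 visible in the restarted target's command line below is what bash process substitution expands to:

    tgtconf=$($rpc save_config)                                # live target state as JSON
    bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config) # same for the initiator side
    ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")          # replay the target from config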
00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3950035 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3950035 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3950035 ']' 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.559 18:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.559 [2024-07-15 18:34:29.010422] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:43.559 [2024-07-15 18:34:29.010470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.559 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.559 [2024-07-15 18:34:29.074114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.819 [2024-07-15 18:34:29.151794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.819 [2024-07-15 18:34:29.151826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.819 [2024-07-15 18:34:29.151833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.819 [2024-07-15 18:34:29.151839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.819 [2024-07-15 18:34:29.151844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
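The replayed target re-emits the same Transport Init, deprecated-PSK-path, TLS-experimental, and listening notices just below, confirming that the saved JSON reconstructs the full nvmf state, TLS listener included. To inspect just that part of a captured config, a jq filter along these lines (shown purely for illustration) pulls out the nvmf subsystem section:

    echo "$tgtconf" | jq '.subsystems[] | select(.subsystem == "nvmf").config'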
00:19:43.819 [2024-07-15 18:34:29.151890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.819 [2024-07-15 18:34:29.353853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.819 [2024-07-15 18:34:29.369830] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:44.078 [2024-07-15 18:34:29.385878] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.078 [2024-07-15 18:34:29.393531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.337 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.337 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:44.337 18:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.337 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.337 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3950130 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3950130 /var/tmp/bdevperf.sock 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3950130 ']' 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
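bdevperf is restarted the same way, with its saved config supplied on /dev/fd/63; the -z flag starts it idle so the actual run can be triggered later over its RPC socket. In sketch form, assuming $bdevperfconf from the save_config step and shortened paths:

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
    # kick off the 10-second verify run once the config replay has attached the controller (tls.sh@211):
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests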
00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:44.338 "subsystems": [ 00:19:44.338 { 00:19:44.338 "subsystem": "keyring", 00:19:44.338 "config": [] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "iobuf", 00:19:44.338 "config": [ 00:19:44.338 { 00:19:44.338 "method": "iobuf_set_options", 00:19:44.338 "params": { 00:19:44.338 "small_pool_count": 8192, 00:19:44.338 "large_pool_count": 1024, 00:19:44.338 "small_bufsize": 8192, 00:19:44.338 "large_bufsize": 135168 00:19:44.338 } 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "sock", 00:19:44.338 "config": [ 00:19:44.338 { 00:19:44.338 "method": "sock_set_default_impl", 00:19:44.338 "params": { 00:19:44.338 "impl_name": "posix" 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "sock_impl_set_options", 00:19:44.338 "params": { 00:19:44.338 "impl_name": "ssl", 00:19:44.338 "recv_buf_size": 4096, 00:19:44.338 "send_buf_size": 4096, 00:19:44.338 "enable_recv_pipe": true, 00:19:44.338 "enable_quickack": false, 00:19:44.338 "enable_placement_id": 0, 00:19:44.338 "enable_zerocopy_send_server": true, 00:19:44.338 "enable_zerocopy_send_client": false, 00:19:44.338 "zerocopy_threshold": 0, 00:19:44.338 "tls_version": 0, 00:19:44.338 "enable_ktls": false 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "sock_impl_set_options", 00:19:44.338 "params": { 00:19:44.338 "impl_name": "posix", 00:19:44.338 "recv_buf_size": 2097152, 00:19:44.338 "send_buf_size": 2097152, 00:19:44.338 "enable_recv_pipe": true, 00:19:44.338 "enable_quickack": false, 00:19:44.338 "enable_placement_id": 0, 00:19:44.338 "enable_zerocopy_send_server": true, 00:19:44.338 "enable_zerocopy_send_client": false, 00:19:44.338 "zerocopy_threshold": 0, 00:19:44.338 "tls_version": 0, 00:19:44.338 "enable_ktls": false 00:19:44.338 } 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "vmd", 00:19:44.338 "config": [] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "accel", 00:19:44.338 "config": [ 00:19:44.338 { 00:19:44.338 "method": "accel_set_options", 00:19:44.338 "params": { 00:19:44.338 "small_cache_size": 128, 00:19:44.338 "large_cache_size": 16, 00:19:44.338 "task_count": 2048, 00:19:44.338 "sequence_count": 2048, 00:19:44.338 "buf_count": 2048 00:19:44.338 } 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "bdev", 00:19:44.338 "config": [ 00:19:44.338 { 00:19:44.338 "method": "bdev_set_options", 00:19:44.338 "params": { 00:19:44.338 "bdev_io_pool_size": 65535, 00:19:44.338 "bdev_io_cache_size": 256, 00:19:44.338 "bdev_auto_examine": true, 00:19:44.338 "iobuf_small_cache_size": 128, 00:19:44.338 "iobuf_large_cache_size": 16 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_raid_set_options", 00:19:44.338 "params": { 00:19:44.338 "process_window_size_kb": 1024 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_iscsi_set_options", 00:19:44.338 "params": { 00:19:44.338 "timeout_sec": 30 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_nvme_set_options", 00:19:44.338 "params": { 00:19:44.338 "action_on_timeout": "none", 00:19:44.338 "timeout_us": 0, 00:19:44.338 "timeout_admin_us": 0, 00:19:44.338 "keep_alive_timeout_ms": 10000, 00:19:44.338 "arbitration_burst": 0, 00:19:44.338 "low_priority_weight": 0, 00:19:44.338 "medium_priority_weight": 0, 00:19:44.338 "high_priority_weight": 0, 00:19:44.338 
"nvme_adminq_poll_period_us": 10000, 00:19:44.338 "nvme_ioq_poll_period_us": 0, 00:19:44.338 "io_queue_requests": 512, 00:19:44.338 "delay_cmd_submit": true, 00:19:44.338 "transport_retry_count": 4, 00:19:44.338 "bdev_retry_count": 3, 00:19:44.338 "transport_ack_timeout": 0, 00:19:44.338 "ctrlr_loss_timeout_sec": 0, 00:19:44.338 "reconnect_delay_sec": 0, 00:19:44.338 "fast_io_fail_timeout_sec": 0, 00:19:44.338 "disable_auto_failback": false, 00:19:44.338 "generate_uuids": false, 00:19:44.338 "transport_tos": 0, 00:19:44.338 "nvme_error_stat": false, 00:19:44.338 "rdma_srq_size": 0, 00:19:44.338 "io_path_stat": false, 00:19:44.338 "allow_accel_sequence": false, 00:19:44.338 "rdma_max_cq_size": 0, 00:19:44.338 "rdma_cm_event_timeout_ms": 0, 00:19:44.338 "dhchap_digests": [ 00:19:44.338 "sha256", 00:19:44.338 "sha384", 00:19:44.338 "sha512" 00:19:44.338 ], 00:19:44.338 "dhchap_dhgroups": [ 00:19:44.338 "null", 00:19:44.338 "ffdhe2048", 00:19:44.338 "ffdhe3072", 00:19:44.338 "ffdhe4096", 00:19:44.338 "ffdhe6144", 00:19:44.338 "ffdhe8192" 00:19:44.338 ] 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_nvme_attach_controller", 00:19:44.338 "params": { 00:19:44.338 "name": "TLSTEST", 00:19:44.338 "trtype": "TCP", 00:19:44.338 "adrfam": "IPv4", 00:19:44.338 "traddr": "10.0.0.2", 00:19:44.338 "trsvcid": "4420", 00:19:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.338 "prchk_reftag": false, 00:19:44.338 "prchk_guard": false, 00:19:44.338 "ctrlr_loss_timeout_sec": 0, 00:19:44.338 "reconnect_delay_sec": 0, 00:19:44.338 "fast_io_fail_timeout_sec": 0, 00:19:44.338 "psk": "/tmp/tmp.kayOMvzTgg", 00:19:44.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.338 "hdgst": false, 00:19:44.338 "ddgst": false 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_nvme_set_hotplug", 00:19:44.338 "params": { 00:19:44.338 "period_us": 100000, 00:19:44.338 "enable": false 00:19:44.338 } 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "method": "bdev_wait_for_examine" 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "subsystem": "nbd", 00:19:44.338 "config": [] 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }' 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.338 18:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.338 [2024-07-15 18:34:29.878575] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:44.338 [2024-07-15 18:34:29.878622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950130 ] 00:19:44.598 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.598 [2024-07-15 18:34:29.946900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.598 [2024-07-15 18:34:30.032348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.856 [2024-07-15 18:34:30.174416] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.856 [2024-07-15 18:34:30.174491] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:45.426 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.426 18:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:45.426 18:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.426 Running I/O for 10 seconds... 00:19:55.396 00:19:55.396 Latency(us) 00:19:55.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.396 Verification LBA range: start 0x0 length 0x2000 00:19:55.396 TLSTESTn1 : 10.01 5401.26 21.10 0.00 0.00 23662.84 4681.14 38447.79 00:19:55.396 =================================================================================================================== 00:19:55.396 Total : 5401.26 21.10 0.00 0.00 23662.84 4681.14 38447.79 00:19:55.396 0 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3950130 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3950130 ']' 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3950130 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3950130 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3950130' 00:19:55.396 killing process with pid 3950130 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3950130 00:19:55.396 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.396 00:19:55.396 Latency(us) 00:19:55.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.396 =================================================================================================================== 00:19:55.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.396 [2024-07-15 18:34:40.872969] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.396 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3950130 00:19:55.654 18:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3950035 00:19:55.654 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3950035 ']' 00:19:55.654 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3950035 00:19:55.654 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3950035 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3950035' 00:19:55.655 killing process with pid 3950035 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3950035 00:19:55.655 [2024-07-15 18:34:41.097602] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:55.655 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3950035 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3951970 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3951970 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3951970 ']' 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.913 18:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.913 [2024-07-15 18:34:41.335523] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:55.913 [2024-07-15 18:34:41.335565] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.913 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.913 [2024-07-15 18:34:41.405737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.172 [2024-07-15 18:34:41.482341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.172 [2024-07-15 18:34:41.482375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.172 [2024-07-15 18:34:41.482382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.172 [2024-07-15 18:34:41.482387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.172 [2024-07-15 18:34:41.482396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.172 [2024-07-15 18:34:41.482419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.kayOMvzTgg 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kayOMvzTgg 00:19:56.756 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.015 [2024-07-15 18:34:42.327893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.015 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.015 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.273 [2024-07-15 18:34:42.684795] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.273 [2024-07-15 18:34:42.684965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.273 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.531 malloc0 00:19:57.531 18:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.531 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.kayOMvzTgg 00:19:57.789 [2024-07-15 18:34:43.230306] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3952363 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3952363 /var/tmp/bdevperf.sock 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3952363 ']' 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.789 18:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.789 [2024-07-15 18:34:43.304492] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:57.789 [2024-07-15 18:34:43.304543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952363 ] 00:19:57.789 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.048 [2024-07-15 18:34:43.370241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.048 [2024-07-15 18:34:43.445109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.646 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.646 18:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:58.646 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kayOMvzTgg 00:19:58.903 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.903 [2024-07-15 18:34:44.430936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.160 nvme0n1 00:19:59.160 18:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.160 Running I/O for 1 seconds... 
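This pass (tls.sh@227-228) stops handing bdev_nvme_attach_controller a raw file path and instead registers the PSK file as a named key on the initiator's RPC socket, the keyring mechanism that supersedes the path-based --psk flagged as deprecated in the warnings above. A sketch, again with /tmp/psk.txt standing in for the mktemp path:

    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

As a sanity check on the Latency tables in this section, the MiB/s column is simply IOPS times the 4096-byte I/O size: 5401.26 IOPS × 4096 / 2^20 ≈ 21.10 MiB/s for the 10-second run above.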
00:20:00.137 00:20:00.137 Latency(us) 00:20:00.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:00.137 Verification LBA range: start 0x0 length 0x2000 00:20:00.137 nvme0n1 : 1.02 5249.71 20.51 0.00 0.00 24191.82 7052.92 22469.49 00:20:00.137 =================================================================================================================== 00:20:00.137 Total : 5249.71 20.51 0.00 0.00 24191.82 7052.92 22469.49 00:20:00.137 0 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3952363 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3952363 ']' 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3952363 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.137 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952363 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952363' 00:20:00.415 killing process with pid 3952363 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3952363 00:20:00.415 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.415 00:20:00.415 Latency(us) 00:20:00.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.415 =================================================================================================================== 00:20:00.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3952363 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3951970 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3951970 ']' 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3951970 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3951970 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3951970' 00:20:00.415 killing process with pid 3951970 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3951970 00:20:00.415 [2024-07-15 18:34:45.920817] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:00.415 18:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3951970 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.674 
18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3952861 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3952861 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3952861 ']' 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.674 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.674 [2024-07-15 18:34:46.165478] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:00.674 [2024-07-15 18:34:46.165528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.674 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.933 [2024-07-15 18:34:46.235538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.933 [2024-07-15 18:34:46.312723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.933 [2024-07-15 18:34:46.312757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.933 [2024-07-15 18:34:46.312764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.933 [2024-07-15 18:34:46.312770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.933 [2024-07-15 18:34:46.312774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
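The startup notices above also describe how to capture the tracepoint data this target is emitting (it was launched with -e 0xFFFF, enabling all tracepoint groups). A minimal sketch of that capture flow, using only the commands the notices themselves name; the shm instance id (-i 0) and trace file path are taken from this run and may differ elsewhere:

    # Snapshot live tracepoints from the running nvmf_tgt (launched with -i 0)
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved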
00:20:00.933 [2024-07-15 18:34:46.312792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.501 18:34:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.501 [2024-07-15 18:34:46.999049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.501 malloc0 00:20:01.501 [2024-07-15 18:34:47.027126] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.501 [2024-07-15 18:34:47.027299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.501 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.501 18:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3952953 00:20:01.501 18:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3952953 /var/tmp/bdevperf.sock 00:20:01.501 18:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:01.501 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3952953 ']' 00:20:01.760 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.760 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.760 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.760 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.760 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.760 [2024-07-15 18:34:47.100222] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
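The bdevperf instance starting here repeats the TLS flow already exercised above: register the PSK file as a keyring key, then attach a TCP controller that references it. A minimal sketch of that initiator-side sequence, assuming the same throwaway key path, socket, and addresses this run uses:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register the PSK file under the name the attach call will reference
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kayOMvzTgg
    # Attach a TLS-protected NVMe/TCP controller using that key
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1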
00:20:01.760 [2024-07-15 18:34:47.100261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952953 ] 00:20:01.760 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.760 [2024-07-15 18:34:47.167393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.760 [2024-07-15 18:34:47.240063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.697 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.697 18:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:02.697 18:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kayOMvzTgg 00:20:02.697 18:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:02.697 [2024-07-15 18:34:48.234246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.955 nvme0n1 00:20:02.955 18:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.955 Running I/O for 1 seconds... 00:20:03.891 00:20:03.891 Latency(us) 00:20:03.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.891 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.891 Verification LBA range: start 0x0 length 0x2000 00:20:03.891 nvme0n1 : 1.02 5086.49 19.87 0.00 0.00 24974.09 6491.18 38198.13 00:20:03.891 =================================================================================================================== 00:20:03.891 Total : 5086.49 19.87 0.00 0.00 24974.09 6491.18 38198.13 00:20:03.891 0 00:20:04.150 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:04.150 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.150 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.150 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.150 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:04.150 "subsystems": [ 00:20:04.150 { 00:20:04.150 "subsystem": "keyring", 00:20:04.150 "config": [ 00:20:04.150 { 00:20:04.150 "method": "keyring_file_add_key", 00:20:04.150 "params": { 00:20:04.150 "name": "key0", 00:20:04.150 "path": "/tmp/tmp.kayOMvzTgg" 00:20:04.150 } 00:20:04.150 } 00:20:04.150 ] 00:20:04.150 }, 00:20:04.150 { 00:20:04.150 "subsystem": "iobuf", 00:20:04.150 "config": [ 00:20:04.150 { 00:20:04.150 "method": "iobuf_set_options", 00:20:04.150 "params": { 00:20:04.150 "small_pool_count": 8192, 00:20:04.151 "large_pool_count": 1024, 00:20:04.151 "small_bufsize": 8192, 00:20:04.151 "large_bufsize": 135168 00:20:04.151 } 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "sock", 00:20:04.151 "config": [ 00:20:04.151 { 00:20:04.151 "method": "sock_set_default_impl", 00:20:04.151 "params": { 00:20:04.151 "impl_name": "posix" 00:20:04.151 } 
00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "sock_impl_set_options", 00:20:04.151 "params": { 00:20:04.151 "impl_name": "ssl", 00:20:04.151 "recv_buf_size": 4096, 00:20:04.151 "send_buf_size": 4096, 00:20:04.151 "enable_recv_pipe": true, 00:20:04.151 "enable_quickack": false, 00:20:04.151 "enable_placement_id": 0, 00:20:04.151 "enable_zerocopy_send_server": true, 00:20:04.151 "enable_zerocopy_send_client": false, 00:20:04.151 "zerocopy_threshold": 0, 00:20:04.151 "tls_version": 0, 00:20:04.151 "enable_ktls": false 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "sock_impl_set_options", 00:20:04.151 "params": { 00:20:04.151 "impl_name": "posix", 00:20:04.151 "recv_buf_size": 2097152, 00:20:04.151 "send_buf_size": 2097152, 00:20:04.151 "enable_recv_pipe": true, 00:20:04.151 "enable_quickack": false, 00:20:04.151 "enable_placement_id": 0, 00:20:04.151 "enable_zerocopy_send_server": true, 00:20:04.151 "enable_zerocopy_send_client": false, 00:20:04.151 "zerocopy_threshold": 0, 00:20:04.151 "tls_version": 0, 00:20:04.151 "enable_ktls": false 00:20:04.151 } 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "vmd", 00:20:04.151 "config": [] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "accel", 00:20:04.151 "config": [ 00:20:04.151 { 00:20:04.151 "method": "accel_set_options", 00:20:04.151 "params": { 00:20:04.151 "small_cache_size": 128, 00:20:04.151 "large_cache_size": 16, 00:20:04.151 "task_count": 2048, 00:20:04.151 "sequence_count": 2048, 00:20:04.151 "buf_count": 2048 00:20:04.151 } 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "bdev", 00:20:04.151 "config": [ 00:20:04.151 { 00:20:04.151 "method": "bdev_set_options", 00:20:04.151 "params": { 00:20:04.151 "bdev_io_pool_size": 65535, 00:20:04.151 "bdev_io_cache_size": 256, 00:20:04.151 "bdev_auto_examine": true, 00:20:04.151 "iobuf_small_cache_size": 128, 00:20:04.151 "iobuf_large_cache_size": 16 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_raid_set_options", 00:20:04.151 "params": { 00:20:04.151 "process_window_size_kb": 1024 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_iscsi_set_options", 00:20:04.151 "params": { 00:20:04.151 "timeout_sec": 30 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_nvme_set_options", 00:20:04.151 "params": { 00:20:04.151 "action_on_timeout": "none", 00:20:04.151 "timeout_us": 0, 00:20:04.151 "timeout_admin_us": 0, 00:20:04.151 "keep_alive_timeout_ms": 10000, 00:20:04.151 "arbitration_burst": 0, 00:20:04.151 "low_priority_weight": 0, 00:20:04.151 "medium_priority_weight": 0, 00:20:04.151 "high_priority_weight": 0, 00:20:04.151 "nvme_adminq_poll_period_us": 10000, 00:20:04.151 "nvme_ioq_poll_period_us": 0, 00:20:04.151 "io_queue_requests": 0, 00:20:04.151 "delay_cmd_submit": true, 00:20:04.151 "transport_retry_count": 4, 00:20:04.151 "bdev_retry_count": 3, 00:20:04.151 "transport_ack_timeout": 0, 00:20:04.151 "ctrlr_loss_timeout_sec": 0, 00:20:04.151 "reconnect_delay_sec": 0, 00:20:04.151 "fast_io_fail_timeout_sec": 0, 00:20:04.151 "disable_auto_failback": false, 00:20:04.151 "generate_uuids": false, 00:20:04.151 "transport_tos": 0, 00:20:04.151 "nvme_error_stat": false, 00:20:04.151 "rdma_srq_size": 0, 00:20:04.151 "io_path_stat": false, 00:20:04.151 "allow_accel_sequence": false, 00:20:04.151 "rdma_max_cq_size": 0, 00:20:04.151 "rdma_cm_event_timeout_ms": 0, 00:20:04.151 "dhchap_digests": [ 00:20:04.151 "sha256", 
00:20:04.151 "sha384", 00:20:04.151 "sha512" 00:20:04.151 ], 00:20:04.151 "dhchap_dhgroups": [ 00:20:04.151 "null", 00:20:04.151 "ffdhe2048", 00:20:04.151 "ffdhe3072", 00:20:04.151 "ffdhe4096", 00:20:04.151 "ffdhe6144", 00:20:04.151 "ffdhe8192" 00:20:04.151 ] 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_nvme_set_hotplug", 00:20:04.151 "params": { 00:20:04.151 "period_us": 100000, 00:20:04.151 "enable": false 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_malloc_create", 00:20:04.151 "params": { 00:20:04.151 "name": "malloc0", 00:20:04.151 "num_blocks": 8192, 00:20:04.151 "block_size": 4096, 00:20:04.151 "physical_block_size": 4096, 00:20:04.151 "uuid": "cb19068b-ff6b-4a3d-8536-f6069e89c9d8", 00:20:04.151 "optimal_io_boundary": 0 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "bdev_wait_for_examine" 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "nbd", 00:20:04.151 "config": [] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "scheduler", 00:20:04.151 "config": [ 00:20:04.151 { 00:20:04.151 "method": "framework_set_scheduler", 00:20:04.151 "params": { 00:20:04.151 "name": "static" 00:20:04.151 } 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "subsystem": "nvmf", 00:20:04.151 "config": [ 00:20:04.151 { 00:20:04.151 "method": "nvmf_set_config", 00:20:04.151 "params": { 00:20:04.151 "discovery_filter": "match_any", 00:20:04.151 "admin_cmd_passthru": { 00:20:04.151 "identify_ctrlr": false 00:20:04.151 } 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_set_max_subsystems", 00:20:04.151 "params": { 00:20:04.151 "max_subsystems": 1024 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_set_crdt", 00:20:04.151 "params": { 00:20:04.151 "crdt1": 0, 00:20:04.151 "crdt2": 0, 00:20:04.151 "crdt3": 0 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_create_transport", 00:20:04.151 "params": { 00:20:04.151 "trtype": "TCP", 00:20:04.151 "max_queue_depth": 128, 00:20:04.151 "max_io_qpairs_per_ctrlr": 127, 00:20:04.151 "in_capsule_data_size": 4096, 00:20:04.151 "max_io_size": 131072, 00:20:04.151 "io_unit_size": 131072, 00:20:04.151 "max_aq_depth": 128, 00:20:04.151 "num_shared_buffers": 511, 00:20:04.151 "buf_cache_size": 4294967295, 00:20:04.151 "dif_insert_or_strip": false, 00:20:04.151 "zcopy": false, 00:20:04.151 "c2h_success": false, 00:20:04.151 "sock_priority": 0, 00:20:04.151 "abort_timeout_sec": 1, 00:20:04.151 "ack_timeout": 0, 00:20:04.151 "data_wr_pool_size": 0 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_create_subsystem", 00:20:04.151 "params": { 00:20:04.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.151 "allow_any_host": false, 00:20:04.151 "serial_number": "00000000000000000000", 00:20:04.151 "model_number": "SPDK bdev Controller", 00:20:04.151 "max_namespaces": 32, 00:20:04.151 "min_cntlid": 1, 00:20:04.151 "max_cntlid": 65519, 00:20:04.151 "ana_reporting": false 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_subsystem_add_host", 00:20:04.151 "params": { 00:20:04.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.151 "host": "nqn.2016-06.io.spdk:host1", 00:20:04.151 "psk": "key0" 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_subsystem_add_ns", 00:20:04.151 "params": { 00:20:04.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.151 "namespace": { 00:20:04.151 "nsid": 1, 
00:20:04.151 "bdev_name": "malloc0", 00:20:04.151 "nguid": "CB19068BFF6B4A3D8536F6069E89C9D8", 00:20:04.151 "uuid": "cb19068b-ff6b-4a3d-8536-f6069e89c9d8", 00:20:04.151 "no_auto_visible": false 00:20:04.151 } 00:20:04.151 } 00:20:04.151 }, 00:20:04.151 { 00:20:04.151 "method": "nvmf_subsystem_add_listener", 00:20:04.151 "params": { 00:20:04.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.151 "listen_address": { 00:20:04.151 "trtype": "TCP", 00:20:04.151 "adrfam": "IPv4", 00:20:04.151 "traddr": "10.0.0.2", 00:20:04.151 "trsvcid": "4420" 00:20:04.151 }, 00:20:04.151 "secure_channel": true 00:20:04.151 } 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 } 00:20:04.151 ] 00:20:04.151 }' 00:20:04.151 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:04.411 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:04.411 "subsystems": [ 00:20:04.411 { 00:20:04.411 "subsystem": "keyring", 00:20:04.411 "config": [ 00:20:04.411 { 00:20:04.411 "method": "keyring_file_add_key", 00:20:04.411 "params": { 00:20:04.411 "name": "key0", 00:20:04.411 "path": "/tmp/tmp.kayOMvzTgg" 00:20:04.411 } 00:20:04.411 } 00:20:04.411 ] 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "subsystem": "iobuf", 00:20:04.411 "config": [ 00:20:04.411 { 00:20:04.411 "method": "iobuf_set_options", 00:20:04.411 "params": { 00:20:04.411 "small_pool_count": 8192, 00:20:04.411 "large_pool_count": 1024, 00:20:04.411 "small_bufsize": 8192, 00:20:04.411 "large_bufsize": 135168 00:20:04.411 } 00:20:04.411 } 00:20:04.411 ] 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "subsystem": "sock", 00:20:04.411 "config": [ 00:20:04.411 { 00:20:04.411 "method": "sock_set_default_impl", 00:20:04.411 "params": { 00:20:04.411 "impl_name": "posix" 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "sock_impl_set_options", 00:20:04.411 "params": { 00:20:04.411 "impl_name": "ssl", 00:20:04.411 "recv_buf_size": 4096, 00:20:04.411 "send_buf_size": 4096, 00:20:04.411 "enable_recv_pipe": true, 00:20:04.411 "enable_quickack": false, 00:20:04.411 "enable_placement_id": 0, 00:20:04.411 "enable_zerocopy_send_server": true, 00:20:04.411 "enable_zerocopy_send_client": false, 00:20:04.411 "zerocopy_threshold": 0, 00:20:04.411 "tls_version": 0, 00:20:04.411 "enable_ktls": false 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "sock_impl_set_options", 00:20:04.411 "params": { 00:20:04.411 "impl_name": "posix", 00:20:04.411 "recv_buf_size": 2097152, 00:20:04.411 "send_buf_size": 2097152, 00:20:04.411 "enable_recv_pipe": true, 00:20:04.411 "enable_quickack": false, 00:20:04.411 "enable_placement_id": 0, 00:20:04.411 "enable_zerocopy_send_server": true, 00:20:04.411 "enable_zerocopy_send_client": false, 00:20:04.411 "zerocopy_threshold": 0, 00:20:04.411 "tls_version": 0, 00:20:04.411 "enable_ktls": false 00:20:04.411 } 00:20:04.411 } 00:20:04.411 ] 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "subsystem": "vmd", 00:20:04.411 "config": [] 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "subsystem": "accel", 00:20:04.411 "config": [ 00:20:04.411 { 00:20:04.411 "method": "accel_set_options", 00:20:04.411 "params": { 00:20:04.411 "small_cache_size": 128, 00:20:04.411 "large_cache_size": 16, 00:20:04.411 "task_count": 2048, 00:20:04.411 "sequence_count": 2048, 00:20:04.411 "buf_count": 2048 00:20:04.411 } 00:20:04.411 } 00:20:04.411 ] 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "subsystem": "bdev", 00:20:04.411 "config": [ 
00:20:04.411 { 00:20:04.411 "method": "bdev_set_options", 00:20:04.411 "params": { 00:20:04.411 "bdev_io_pool_size": 65535, 00:20:04.411 "bdev_io_cache_size": 256, 00:20:04.411 "bdev_auto_examine": true, 00:20:04.411 "iobuf_small_cache_size": 128, 00:20:04.411 "iobuf_large_cache_size": 16 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "bdev_raid_set_options", 00:20:04.411 "params": { 00:20:04.411 "process_window_size_kb": 1024 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "bdev_iscsi_set_options", 00:20:04.411 "params": { 00:20:04.411 "timeout_sec": 30 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "bdev_nvme_set_options", 00:20:04.411 "params": { 00:20:04.411 "action_on_timeout": "none", 00:20:04.411 "timeout_us": 0, 00:20:04.411 "timeout_admin_us": 0, 00:20:04.411 "keep_alive_timeout_ms": 10000, 00:20:04.411 "arbitration_burst": 0, 00:20:04.411 "low_priority_weight": 0, 00:20:04.411 "medium_priority_weight": 0, 00:20:04.411 "high_priority_weight": 0, 00:20:04.411 "nvme_adminq_poll_period_us": 10000, 00:20:04.411 "nvme_ioq_poll_period_us": 0, 00:20:04.411 "io_queue_requests": 512, 00:20:04.411 "delay_cmd_submit": true, 00:20:04.411 "transport_retry_count": 4, 00:20:04.411 "bdev_retry_count": 3, 00:20:04.411 "transport_ack_timeout": 0, 00:20:04.411 "ctrlr_loss_timeout_sec": 0, 00:20:04.411 "reconnect_delay_sec": 0, 00:20:04.411 "fast_io_fail_timeout_sec": 0, 00:20:04.411 "disable_auto_failback": false, 00:20:04.411 "generate_uuids": false, 00:20:04.411 "transport_tos": 0, 00:20:04.411 "nvme_error_stat": false, 00:20:04.411 "rdma_srq_size": 0, 00:20:04.411 "io_path_stat": false, 00:20:04.411 "allow_accel_sequence": false, 00:20:04.411 "rdma_max_cq_size": 0, 00:20:04.411 "rdma_cm_event_timeout_ms": 0, 00:20:04.411 "dhchap_digests": [ 00:20:04.411 "sha256", 00:20:04.411 "sha384", 00:20:04.411 "sha512" 00:20:04.411 ], 00:20:04.411 "dhchap_dhgroups": [ 00:20:04.411 "null", 00:20:04.411 "ffdhe2048", 00:20:04.411 "ffdhe3072", 00:20:04.411 "ffdhe4096", 00:20:04.411 "ffdhe6144", 00:20:04.411 "ffdhe8192" 00:20:04.411 ] 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "bdev_nvme_attach_controller", 00:20:04.411 "params": { 00:20:04.411 "name": "nvme0", 00:20:04.411 "trtype": "TCP", 00:20:04.411 "adrfam": "IPv4", 00:20:04.411 "traddr": "10.0.0.2", 00:20:04.411 "trsvcid": "4420", 00:20:04.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.411 "prchk_reftag": false, 00:20:04.411 "prchk_guard": false, 00:20:04.411 "ctrlr_loss_timeout_sec": 0, 00:20:04.411 "reconnect_delay_sec": 0, 00:20:04.411 "fast_io_fail_timeout_sec": 0, 00:20:04.411 "psk": "key0", 00:20:04.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.411 "hdgst": false, 00:20:04.411 "ddgst": false 00:20:04.411 } 00:20:04.411 }, 00:20:04.411 { 00:20:04.411 "method": "bdev_nvme_set_hotplug", 00:20:04.412 "params": { 00:20:04.412 "period_us": 100000, 00:20:04.412 "enable": false 00:20:04.412 } 00:20:04.412 }, 00:20:04.412 { 00:20:04.412 "method": "bdev_enable_histogram", 00:20:04.412 "params": { 00:20:04.412 "name": "nvme0n1", 00:20:04.412 "enable": true 00:20:04.412 } 00:20:04.412 }, 00:20:04.412 { 00:20:04.412 "method": "bdev_wait_for_examine" 00:20:04.412 } 00:20:04.412 ] 00:20:04.412 }, 00:20:04.412 { 00:20:04.412 "subsystem": "nbd", 00:20:04.412 "config": [] 00:20:04.412 } 00:20:04.412 ] 00:20:04.412 }' 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3952953 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3952953 ']' 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3952953 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952953 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952953' 00:20:04.412 killing process with pid 3952953 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3952953 00:20:04.412 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.412 00:20:04.412 Latency(us) 00:20:04.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.412 =================================================================================================================== 00:20:04.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.412 18:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3952953 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3952861 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3952861 ']' 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3952861 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952861 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952861' 00:20:04.671 killing process with pid 3952861 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3952861 00:20:04.671 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3952861 00:20:04.930 18:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:04.930 18:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.930 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.930 18:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:04.930 "subsystems": [ 00:20:04.930 { 00:20:04.930 "subsystem": "keyring", 00:20:04.930 "config": [ 00:20:04.930 { 00:20:04.930 "method": "keyring_file_add_key", 00:20:04.930 "params": { 00:20:04.930 "name": "key0", 00:20:04.930 "path": "/tmp/tmp.kayOMvzTgg" 00:20:04.930 } 00:20:04.930 } 00:20:04.930 ] 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "subsystem": "iobuf", 00:20:04.930 "config": [ 00:20:04.930 { 00:20:04.930 "method": "iobuf_set_options", 00:20:04.930 "params": { 00:20:04.930 "small_pool_count": 8192, 00:20:04.930 "large_pool_count": 1024, 00:20:04.930 "small_bufsize": 8192, 00:20:04.930 "large_bufsize": 135168 00:20:04.930 } 00:20:04.930 } 00:20:04.930 ] 00:20:04.930 }, 
00:20:04.930 { 00:20:04.930 "subsystem": "sock", 00:20:04.930 "config": [ 00:20:04.930 { 00:20:04.930 "method": "sock_set_default_impl", 00:20:04.930 "params": { 00:20:04.930 "impl_name": "posix" 00:20:04.930 } 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "method": "sock_impl_set_options", 00:20:04.930 "params": { 00:20:04.930 "impl_name": "ssl", 00:20:04.930 "recv_buf_size": 4096, 00:20:04.930 "send_buf_size": 4096, 00:20:04.930 "enable_recv_pipe": true, 00:20:04.930 "enable_quickack": false, 00:20:04.930 "enable_placement_id": 0, 00:20:04.930 "enable_zerocopy_send_server": true, 00:20:04.930 "enable_zerocopy_send_client": false, 00:20:04.930 "zerocopy_threshold": 0, 00:20:04.930 "tls_version": 0, 00:20:04.930 "enable_ktls": false 00:20:04.930 } 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "method": "sock_impl_set_options", 00:20:04.930 "params": { 00:20:04.930 "impl_name": "posix", 00:20:04.930 "recv_buf_size": 2097152, 00:20:04.930 "send_buf_size": 2097152, 00:20:04.930 "enable_recv_pipe": true, 00:20:04.930 "enable_quickack": false, 00:20:04.930 "enable_placement_id": 0, 00:20:04.930 "enable_zerocopy_send_server": true, 00:20:04.930 "enable_zerocopy_send_client": false, 00:20:04.930 "zerocopy_threshold": 0, 00:20:04.930 "tls_version": 0, 00:20:04.930 "enable_ktls": false 00:20:04.930 } 00:20:04.930 } 00:20:04.930 ] 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "subsystem": "vmd", 00:20:04.930 "config": [] 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "subsystem": "accel", 00:20:04.930 "config": [ 00:20:04.930 { 00:20:04.930 "method": "accel_set_options", 00:20:04.930 "params": { 00:20:04.930 "small_cache_size": 128, 00:20:04.930 "large_cache_size": 16, 00:20:04.930 "task_count": 2048, 00:20:04.930 "sequence_count": 2048, 00:20:04.930 "buf_count": 2048 00:20:04.930 } 00:20:04.930 } 00:20:04.930 ] 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "subsystem": "bdev", 00:20:04.930 "config": [ 00:20:04.930 { 00:20:04.930 "method": "bdev_set_options", 00:20:04.930 "params": { 00:20:04.930 "bdev_io_pool_size": 65535, 00:20:04.930 "bdev_io_cache_size": 256, 00:20:04.930 "bdev_auto_examine": true, 00:20:04.930 "iobuf_small_cache_size": 128, 00:20:04.930 "iobuf_large_cache_size": 16 00:20:04.930 } 00:20:04.930 }, 00:20:04.930 { 00:20:04.930 "method": "bdev_raid_set_options", 00:20:04.930 "params": { 00:20:04.930 "process_window_size_kb": 1024 00:20:04.930 } 00:20:04.930 }, 00:20:04.931 { 00:20:04.931 "method": "bdev_iscsi_set_options", 00:20:04.931 "params": { 00:20:04.931 "timeout_sec": 30 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "bdev_nvme_set_options", 00:20:04.931 "params": { 00:20:04.931 "action_on_timeout": "none", 00:20:04.931 "timeout_us": 0, 00:20:04.931 "timeout_admin_us": 0, 00:20:04.931 "keep_alive_timeout_ms": 10000, 00:20:04.931 "arbitration_burst": 0, 00:20:04.931 "low_priority_weight": 0, 00:20:04.931 "medium_priority_weight": 0, 00:20:04.931 "high_priority_weight": 0, 00:20:04.931 "nvme_adminq_poll_period_us": 10000, 00:20:04.931 "nvme_ioq_poll_period_us": 0, 00:20:04.931 "io_queue_requests": 0, 00:20:04.931 "delay_cmd_submit": true, 00:20:04.931 "transport_retry_count": 4, 00:20:04.931 "bdev_retry_count": 3, 00:20:04.931 "transport_ack_timeout": 0, 00:20:04.931 "ctrlr_loss_timeout_sec": 0, 00:20:04.931 "reconnect_delay_sec": 0, 00:20:04.931 "fast_io_fail_timeout_sec": 0, 00:20:04.931 "disable_auto_failback": false, 00:20:04.931 "generate_uuids": false, 00:20:04.931 "transport_tos": 0, 00:20:04.931 "nvme_error_stat": false, 00:20:04.931 "rdma_srq_size": 0, 
00:20:04.931 "io_path_stat": false, 00:20:04.931 "allow_accel_sequence": false, 00:20:04.931 "rdma_max_cq_size": 0, 00:20:04.931 "rdma_cm_event_timeout_ms": 0, 00:20:04.931 "dhchap_digests": [ 00:20:04.931 "sha256", 00:20:04.931 "sha384", 00:20:04.931 "sha512" 00:20:04.931 ], 00:20:04.931 "dhchap_dhgroups": [ 00:20:04.931 "null", 00:20:04.931 "ffdhe2048", 00:20:04.931 "ffdhe3072", 00:20:04.931 "ffdhe4096", 00:20:04.931 "ffdhe6144", 00:20:04.931 "ffdhe8192" 00:20:04.931 ] 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "bdev_nvme_set_hotplug", 00:20:04.931 "params": { 00:20:04.931 "period_us": 100000, 00:20:04.931 "enable": false 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "bdev_malloc_create", 00:20:04.931 "params": { 00:20:04.931 "name": "malloc0", 00:20:04.931 "num_blocks": 8192, 00:20:04.931 "block_size": 4096, 00:20:04.931 "physical_block_size": 4096, 00:20:04.931 "uuid": "cb19068b-ff6b-4a3d-8536-f6069e89c9d8", 00:20:04.931 "optimal_io_boundary": 0 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "bdev_wait_for_examine" 00:20:04.931 } 00:20:04.931 ] 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "subsystem": "nbd", 00:20:04.931 "config": [] 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "subsystem": "scheduler", 00:20:04.931 "config": [ 00:20:04.931 { 00:20:04.931 "method": "framework_set_scheduler", 00:20:04.931 "params": { 00:20:04.931 "name": "static" 00:20:04.931 } 00:20:04.931 } 00:20:04.931 ] 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "subsystem": "nvmf", 00:20:04.931 "config": [ 00:20:04.931 { 00:20:04.931 "method": "nvmf_set_config", 00:20:04.931 "params": { 00:20:04.931 "discovery_filter": "match_any", 00:20:04.931 "admin_cmd_passthru": { 00:20:04.931 "identify_ctrlr": false 00:20:04.931 } 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_set_max_subsystems", 00:20:04.931 "params": { 00:20:04.931 "max_subsystems": 1024 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_set_crdt", 00:20:04.931 "params": { 00:20:04.931 "crdt1": 0, 00:20:04.931 "crdt2": 0, 00:20:04.931 "crdt3": 0 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_create_transport", 00:20:04.931 "params": { 00:20:04.931 "trtype": "TCP", 00:20:04.931 "max_queue_depth": 128, 00:20:04.931 "max_io_qpairs_per_ctrlr": 127, 00:20:04.931 "in_capsule_data_size": 4096, 00:20:04.931 "max_io_size": 131072, 00:20:04.931 "io_unit_size": 131072, 00:20:04.931 "max_aq_depth": 128, 00:20:04.931 "num_shared_buffers": 511, 00:20:04.931 "buf_cache_size": 4294967295, 00:20:04.931 "dif_insert_or_strip": false, 00:20:04.931 "zcopy": false, 00:20:04.931 "c2h_success": false, 00:20:04.931 "sock_priority": 0, 00:20:04.931 "abort_timeout_sec": 1, 00:20:04.931 "ack_timeout": 0, 00:20:04.931 "data_wr_pool_size": 0 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_create_subsystem", 00:20:04.931 "params": { 00:20:04.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.931 "allow_any_host": false, 00:20:04.931 "serial_number": "00000000000000000000", 00:20:04.931 "model_number": "SPDK bdev Controller", 00:20:04.931 "max_namespaces": 32, 00:20:04.931 "min_cntlid": 1, 00:20:04.931 "max_cntlid": 65519, 00:20:04.931 "ana_reporting": false 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_subsystem_add_host", 00:20:04.931 "params": { 00:20:04.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.931 "host": "nqn.2016-06.io.spdk:host1", 00:20:04.931 "psk": "key0" 00:20:04.931 } 
00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_subsystem_add_ns", 00:20:04.931 "params": { 00:20:04.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.931 "namespace": { 00:20:04.931 "nsid": 1, 00:20:04.931 "bdev_name": "malloc0", 00:20:04.931 "nguid": "CB19068BFF6B4A3D8536F6069E89C9D8", 00:20:04.931 "uuid": "cb19068b-ff6b-4a3d-8536-f6069e89c9d8", 00:20:04.931 "no_auto_visible": false 00:20:04.931 } 00:20:04.931 } 00:20:04.931 }, 00:20:04.931 { 00:20:04.931 "method": "nvmf_subsystem_add_listener", 00:20:04.931 "params": { 00:20:04.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.931 "listen_address": { 00:20:04.931 "trtype": "TCP", 00:20:04.931 "adrfam": "IPv4", 00:20:04.931 "traddr": "10.0.0.2", 00:20:04.931 "trsvcid": "4420" 00:20:04.931 }, 00:20:04.931 "secure_channel": true 00:20:04.931 } 00:20:04.931 } 00:20:04.931 ] 00:20:04.931 } 00:20:04.931 ] 00:20:04.931 }' 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3953543 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3953543 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3953543 ']' 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.931 18:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.931 [2024-07-15 18:34:50.337018] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:04.931 [2024-07-15 18:34:50.337062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.931 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.931 [2024-07-15 18:34:50.407449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.931 [2024-07-15 18:34:50.478982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.931 [2024-07-15 18:34:50.479023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.931 [2024-07-15 18:34:50.479030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.931 [2024-07-15 18:34:50.479036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.931 [2024-07-15 18:34:50.479042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.931 [2024-07-15 18:34:50.479092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.191 [2024-07-15 18:34:50.688146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.191 [2024-07-15 18:34:50.720180] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.191 [2024-07-15 18:34:50.733631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3953680 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3953680 /var/tmp/bdevperf.sock 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3953680 ']' 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
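This bdevperf instance is likewise pre-provisioned through -c /dev/fd/63: the echoed JSON that follows bakes in the keyring key and the bdev_nvme_attach_controller call, so by the time the RPC socket is up the TLS controller already exists. A minimal sketch of how the run is then verified and driven, using the same commands this section invokes below:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Confirm the pre-attached controller came up with the expected name
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    # Kick off the configured workload (-q 128 -o 4k -w verify -t 1 on the bdevperf command line)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests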
00:20:05.758 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:05.758 "subsystems": [ 00:20:05.758 { 00:20:05.758 "subsystem": "keyring", 00:20:05.758 "config": [ 00:20:05.758 { 00:20:05.758 "method": "keyring_file_add_key", 00:20:05.758 "params": { 00:20:05.758 "name": "key0", 00:20:05.758 "path": "/tmp/tmp.kayOMvzTgg" 00:20:05.758 } 00:20:05.758 } 00:20:05.758 ] 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "subsystem": "iobuf", 00:20:05.758 "config": [ 00:20:05.758 { 00:20:05.758 "method": "iobuf_set_options", 00:20:05.758 "params": { 00:20:05.758 "small_pool_count": 8192, 00:20:05.758 "large_pool_count": 1024, 00:20:05.758 "small_bufsize": 8192, 00:20:05.758 "large_bufsize": 135168 00:20:05.758 } 00:20:05.758 } 00:20:05.758 ] 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "subsystem": "sock", 00:20:05.758 "config": [ 00:20:05.758 { 00:20:05.758 "method": "sock_set_default_impl", 00:20:05.758 "params": { 00:20:05.758 "impl_name": "posix" 00:20:05.758 } 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "method": "sock_impl_set_options", 00:20:05.758 "params": { 00:20:05.758 "impl_name": "ssl", 00:20:05.758 "recv_buf_size": 4096, 00:20:05.758 "send_buf_size": 4096, 00:20:05.758 "enable_recv_pipe": true, 00:20:05.758 "enable_quickack": false, 00:20:05.758 "enable_placement_id": 0, 00:20:05.758 "enable_zerocopy_send_server": true, 00:20:05.758 "enable_zerocopy_send_client": false, 00:20:05.758 "zerocopy_threshold": 0, 00:20:05.758 "tls_version": 0, 00:20:05.758 "enable_ktls": false 00:20:05.758 } 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "method": "sock_impl_set_options", 00:20:05.758 "params": { 00:20:05.758 "impl_name": "posix", 00:20:05.758 "recv_buf_size": 2097152, 00:20:05.758 "send_buf_size": 2097152, 00:20:05.758 "enable_recv_pipe": true, 00:20:05.758 "enable_quickack": false, 00:20:05.758 "enable_placement_id": 0, 00:20:05.758 "enable_zerocopy_send_server": true, 00:20:05.758 "enable_zerocopy_send_client": false, 00:20:05.758 "zerocopy_threshold": 0, 00:20:05.758 "tls_version": 0, 00:20:05.758 "enable_ktls": false 00:20:05.758 } 00:20:05.758 } 00:20:05.758 ] 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "subsystem": "vmd", 00:20:05.758 "config": [] 00:20:05.758 }, 00:20:05.758 { 00:20:05.758 "subsystem": "accel", 00:20:05.758 "config": [ 00:20:05.758 { 00:20:05.758 "method": "accel_set_options", 00:20:05.758 "params": { 00:20:05.759 "small_cache_size": 128, 00:20:05.759 "large_cache_size": 16, 00:20:05.759 "task_count": 2048, 00:20:05.759 "sequence_count": 2048, 00:20:05.759 "buf_count": 2048 00:20:05.759 } 00:20:05.759 } 00:20:05.759 ] 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "subsystem": "bdev", 00:20:05.759 "config": [ 00:20:05.759 { 00:20:05.759 "method": "bdev_set_options", 00:20:05.759 "params": { 00:20:05.759 "bdev_io_pool_size": 65535, 00:20:05.759 "bdev_io_cache_size": 256, 00:20:05.759 "bdev_auto_examine": true, 00:20:05.759 "iobuf_small_cache_size": 128, 00:20:05.759 "iobuf_large_cache_size": 16 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_raid_set_options", 00:20:05.759 "params": { 00:20:05.759 "process_window_size_kb": 1024 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_iscsi_set_options", 00:20:05.759 "params": { 00:20:05.759 "timeout_sec": 30 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_nvme_set_options", 00:20:05.759 "params": { 00:20:05.759 "action_on_timeout": "none", 00:20:05.759 "timeout_us": 0, 00:20:05.759 "timeout_admin_us": 0, 00:20:05.759 "keep_alive_timeout_ms": 
10000, 00:20:05.759 "arbitration_burst": 0, 00:20:05.759 "low_priority_weight": 0, 00:20:05.759 "medium_priority_weight": 0, 00:20:05.759 "high_priority_weight": 0, 00:20:05.759 "nvme_adminq_poll_period_us": 10000, 00:20:05.759 "nvme_ioq_poll_period_us": 0, 00:20:05.759 "io_queue_requests": 512, 00:20:05.759 "delay_cmd_submit": true, 00:20:05.759 "transport_retry_count": 4, 00:20:05.759 "bdev_retry_count": 3, 00:20:05.759 "transport_ack_timeout": 0, 00:20:05.759 "ctrlr_loss_timeout_sec": 0, 00:20:05.759 "reconnect_delay_sec": 0, 00:20:05.759 "fast_io_fail_timeout_sec": 0, 00:20:05.759 "disable_auto_failback": false, 00:20:05.759 "generate_uuids": false, 00:20:05.759 "transport_tos": 0, 00:20:05.759 "nvme_error_stat": false, 00:20:05.759 "rdma_srq_size": 0, 00:20:05.759 "io_path_stat": false, 00:20:05.759 "allow_accel_sequence": false, 00:20:05.759 "rdma_max_cq_size": 0, 00:20:05.759 "rdma_cm_event_timeout_ms": 0, 00:20:05.759 "dhchap_digests": [ 00:20:05.759 "sha256", 00:20:05.759 "sha384", 00:20:05.759 "sha512" 00:20:05.759 ], 00:20:05.759 "dhchap_dhgroups": [ 00:20:05.759 "null", 00:20:05.759 "ffdhe2048", 00:20:05.759 "ffdhe3072", 00:20:05.759 "ffdhe4096", 00:20:05.759 "ffdhe6144", 00:20:05.759 "ffdhe8192" 00:20:05.759 ] 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_nvme_attach_controller", 00:20:05.759 "params": { 00:20:05.759 "name": "nvme0", 00:20:05.759 "trtype": "TCP", 00:20:05.759 "adrfam": "IPv4", 00:20:05.759 "traddr": "10.0.0.2", 00:20:05.759 "trsvcid": "4420", 00:20:05.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.759 "prchk_reftag": false, 00:20:05.759 "prchk_guard": false, 00:20:05.759 "ctrlr_loss_timeout_sec": 0, 00:20:05.759 "reconnect_delay_sec": 0, 00:20:05.759 "fast_io_fail_timeout_sec": 0, 00:20:05.759 "psk": "key0", 00:20:05.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.759 "hdgst": false, 00:20:05.759 "ddgst": false 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_nvme_set_hotplug", 00:20:05.759 "params": { 00:20:05.759 "period_us": 100000, 00:20:05.759 "enable": false 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_enable_histogram", 00:20:05.759 "params": { 00:20:05.759 "name": "nvme0n1", 00:20:05.759 "enable": true 00:20:05.759 } 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "method": "bdev_wait_for_examine" 00:20:05.759 } 00:20:05.759 ] 00:20:05.759 }, 00:20:05.759 { 00:20:05.759 "subsystem": "nbd", 00:20:05.759 "config": [] 00:20:05.759 } 00:20:05.759 ] 00:20:05.759 }' 00:20:05.759 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.759 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.759 [2024-07-15 18:34:51.217946] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:20:05.759 [2024-07-15 18:34:51.217992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953680 ] 00:20:05.759 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.759 [2024-07-15 18:34:51.284471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.018 [2024-07-15 18:34:51.358262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.018 [2024-07-15 18:34:51.508641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.585 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.585 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:06.585 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:06.585 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:06.844 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.844 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.844 Running I/O for 1 seconds... 00:20:07.780 00:20:07.780 Latency(us) 00:20:07.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.780 Verification LBA range: start 0x0 length 0x2000 00:20:07.780 nvme0n1 : 1.01 5616.59 21.94 0.00 0.00 22633.16 5430.13 27962.03 00:20:07.780 =================================================================================================================== 00:20:07.780 Total : 5616.59 21.94 0.00 0.00 22633.16 5430.13 27962.03 00:20:07.780 0 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:07.780 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:07.780 nvmf_trace.0 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3953680 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3953680 ']' 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3953680 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953680 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953680' 00:20:08.039 killing process with pid 3953680 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3953680 00:20:08.039 Received shutdown signal, test time was about 1.000000 seconds 00:20:08.039 00:20:08.039 Latency(us) 00:20:08.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.039 =================================================================================================================== 00:20:08.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.039 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3953680 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.298 rmmod nvme_tcp 00:20:08.298 rmmod nvme_fabrics 00:20:08.298 rmmod nvme_keyring 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3953543 ']' 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3953543 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3953543 ']' 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3953543 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953543 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953543' 00:20:08.298 killing process with pid 3953543 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3953543 00:20:08.298 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3953543 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.558 18:34:53 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.558 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.462 18:34:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.462 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.HxdiaOj16L /tmp/tmp.nd7EeHGzvR /tmp/tmp.kayOMvzTgg 00:20:10.462 00:20:10.462 real 1m25.155s 00:20:10.462 user 2m11.482s 00:20:10.462 sys 0m28.865s 00:20:10.462 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.462 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.462 ************************************ 00:20:10.462 END TEST nvmf_tls 00:20:10.462 ************************************ 00:20:10.462 18:34:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.462 18:34:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:10.462 18:34:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.462 18:34:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.462 18:34:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.721 ************************************ 00:20:10.721 START TEST nvmf_fips 00:20:10.721 ************************************ 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:10.721 * Looking for test storage... 
00:20:10.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.721 18:34:56 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:10.721 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:10.722 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:10.981 Error setting digest 00:20:10.981 00F247FAA07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:10.981 00F247FAA07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.981 18:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.554 
18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:17.554 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:17.554 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:17.554 Found net devices under 0000:86:00.0: cvl_0_0 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:17.554 Found net devices under 0000:86:00.1: cvl_0_1 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.554 18:35:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:20:17.554 00:20:17.554 --- 10.0.0.2 ping statistics --- 00:20:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.554 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
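The nvmf_tcp_init sequence traced here turns the two ports of one physical NIC into a closed test link: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule admitting the NVMe/TCP port and a ping in each direction as a sanity check (the second ping's replies continue below). A condensed sketch of the same wiring, using illustrative interface and namespace names in place of the cvl_* devices:

  #!/usr/bin/env bash
  set -e
  TGT_IF=devX INI_IF=devY NS=test_ns            # illustrative names
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"             # target port enters the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                            # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace -> root namespace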
00:20:17.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:20:17.554 00:20:17.554 --- 10.0.0.1 ping statistics --- 00:20:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.554 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3957694 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3957694 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3957694 ']' 00:20:17.554 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.555 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.555 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.555 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.555 18:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.555 [2024-07-15 18:35:02.266685] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:17.555 [2024-07-15 18:35:02.266731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.555 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.555 [2024-07-15 18:35:02.335107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.555 [2024-07-15 18:35:02.410032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.555 [2024-07-15 18:35:02.410072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:17.555 [2024-07-15 18:35:02.410078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.555 [2024-07-15 18:35:02.410083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.555 [2024-07-15 18:35:02.410088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.555 [2024-07-15 18:35:02.410109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:17.555 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:17.813 [2024-07-15 18:35:03.238924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.813 [2024-07-15 18:35:03.254946] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.813 [2024-07-15 18:35:03.255109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.813 [2024-07-15 18:35:03.283185] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.813 malloc0 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3957901 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3957901 /var/tmp/bdevperf.sock 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3957901 ']' 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.813 18:35:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.072 [2024-07-15 18:35:03.375189] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:18.072 [2024-07-15 18:35:03.375238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957901 ] 00:20:18.072 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.072 [2024-07-15 18:35:03.441076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.072 [2024-07-15 18:35:03.516957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.639 18:35:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.639 18:35:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:18.639 18:35:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:18.897 [2024-07-15 18:35:04.310832] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.897 [2024-07-15 18:35:04.310901] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.897 TLSTESTn1 00:20:18.897 18:35:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.155 Running I/O for 10 seconds... 
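Before this 10-second run started, the trace above shows the test writing the TLS PSK in NVMe-oF interchange format to key.txt with 0600 permissions, registering it with the target via setup_nvmf_tgt_conf, and then attaching a TLS-protected controller through bdevperf's RPC socket. The essential host-side steps, condensed from the trace (target-side subsystem plumbing happens inside setup_nvmf_tgt_conf and is elided here; paths are shortened for readability):

  # PSK in interchange format, kept private on disk
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt

  # Attach a TLS-enabled NVMe/TCP controller through bdevperf's RPC socket
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt

  # Drive the 128-deep, 4 KiB verify workload for 10 s (flags were set
  # when bdevperf was launched with -q 128 -o 4096 -w verify -t 10)
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests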
00:20:29.128
00:20:29.128 Latency(us)
00:20:29.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.128 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:29.128 Verification LBA range: start 0x0 length 0x2000
00:20:29.128 TLSTESTn1 : 10.02 5014.86 19.59 0.00 0.00 25486.58 4930.80 35701.52
00:20:29.128 ===================================================================================================================
00:20:29.128 Total : 5014.86 19.59 0.00 0.00 25486.58 4930.80 35701.52
00:20:29.128 0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:29.128 nvmf_trace.0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3957901
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3957901 ']'
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3957901
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3957901
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3957901'
00:20:29.128 killing process with pid 3957901
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3957901
00:20:29.128 Received shutdown signal, test time was about 10.000000 seconds
00:20:29.128
00:20:29.128 Latency(us)
00:20:29.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.128 ===================================================================================================================
00:20:29.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:29.128 [2024-07-15 18:35:14.669669] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:29.128 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3957901
00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.387 rmmod nvme_tcp 00:20:29.387 rmmod nvme_fabrics 00:20:29.387 rmmod nvme_keyring 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3957694 ']' 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3957694 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3957694 ']' 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3957694 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.387 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3957694 00:20:29.645 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:29.645 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:29.645 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3957694' 00:20:29.645 killing process with pid 3957694 00:20:29.645 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3957694 00:20:29.645 [2024-07-15 18:35:14.958042] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:29.645 18:35:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3957694 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.645 18:35:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.201 18:35:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:32.201 18:35:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:32.201 00:20:32.201 real 0m21.163s 00:20:32.201 user 0m22.459s 00:20:32.202 sys 0m9.493s 00:20:32.202 18:35:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.202 18:35:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.202 ************************************ 00:20:32.202 END TEST nvmf_fips 
00:20:32.202 ************************************ 00:20:32.202 18:35:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:32.202 18:35:17 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:32.202 18:35:17 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:32.202 18:35:17 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:32.202 18:35:17 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:32.202 18:35:17 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:32.202 18:35:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.474 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.474 18:35:22 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.474 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.474 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.474 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:37.474 18:35:22 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:37.474 18:35:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.474 18:35:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
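Each test section re-runs gather_supported_nvmf_pci_devs, which buckets NICs by PCI vendor:device ID (Intel E810 is 0x1592/0x159b, X722 is 0x37d2, plus a list of Mellanox IDs) and then resolves each PCI function to its kernel netdev through /sys/bus/pci/devices/$pci/net/. A rough stand-alone equivalent using lspci (an assumption for illustration; the harness itself reads a prebuilt pci_bus_cache rather than shelling out to lspci):

  #!/usr/bin/env bash
  # Enumerate Intel E810 functions and the netdevs they back
  for id in 1592 159b; do
    for pci in $(lspci -Dn -d "8086:${id}" | awk '{print $1}'); do
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e "$net" ]] && echo "Found net devices under $pci: ${net##*/}"
      done
    done
  done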
00:20:37.474 18:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.474 ************************************ 00:20:37.474 START TEST nvmf_perf_adq 00:20:37.474 ************************************ 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:37.474 * Looking for test storage... 00:20:37.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.474 18:35:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.767 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.767 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.767 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.768 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:42.768 18:35:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:44.145 18:35:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:47.432 18:35:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:52.704 18:35:37 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.705 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.705 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.705 18:35:37 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:20:52.705 00:20:52.705 --- 10.0.0.2 ping statistics --- 00:20:52.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.705 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:20:52.705 00:20:52.705 --- 10.0.0.1 ping statistics --- 00:20:52.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.705 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3967860 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3967860 00:20:52.705 18:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3967860 ']' 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.706 18:35:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.706 [2024-07-15 18:35:37.898659] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
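The nvmf_tcp_init sequence that just ran builds the whole loopback test bed from one dual-port NIC: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and the two pings above prove both directions work before the target process starts. A sketch of the same topology with hypothetical interface names (eth_tgt/eth_ini stand in for the cvl_* ports):

    #!/usr/bin/env bash
    NS=tgt_ns TGT_IF=eth_tgt INI_IF=eth_ini                   # hypothetical names
    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                         # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target address
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                                        # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target ns -> initiator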
00:20:52.706 [2024-07-15 18:35:37.898707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.706 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.706 [2024-07-15 18:35:37.968623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.706 [2024-07-15 18:35:38.048980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.706 [2024-07-15 18:35:38.049016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.706 [2024-07-15 18:35:38.049023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.706 [2024-07-15 18:35:38.049029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.706 [2024-07-15 18:35:38.049033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.706 [2024-07-15 18:35:38.049081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.706 [2024-07-15 18:35:38.049192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.706 [2024-07-15 18:35:38.049295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.706 [2024-07-15 18:35:38.049297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.271 18:35:38 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.528 [2024-07-15 18:35:38.886182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.528 Malloc1 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.528 [2024-07-15 18:35:38.933733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3967931 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:53.528 18:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:53.528 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:55.425 
"tick_rate": 2100000000, 00:20:55.425 "poll_groups": [ 00:20:55.425 { 00:20:55.425 "name": "nvmf_tgt_poll_group_000", 00:20:55.425 "admin_qpairs": 1, 00:20:55.425 "io_qpairs": 1, 00:20:55.425 "current_admin_qpairs": 1, 00:20:55.425 "current_io_qpairs": 1, 00:20:55.425 "pending_bdev_io": 0, 00:20:55.425 "completed_nvme_io": 20212, 00:20:55.425 "transports": [ 00:20:55.425 { 00:20:55.425 "trtype": "TCP" 00:20:55.425 } 00:20:55.425 ] 00:20:55.425 }, 00:20:55.425 { 00:20:55.425 "name": "nvmf_tgt_poll_group_001", 00:20:55.425 "admin_qpairs": 0, 00:20:55.425 "io_qpairs": 1, 00:20:55.425 "current_admin_qpairs": 0, 00:20:55.425 "current_io_qpairs": 1, 00:20:55.425 "pending_bdev_io": 0, 00:20:55.425 "completed_nvme_io": 20476, 00:20:55.425 "transports": [ 00:20:55.425 { 00:20:55.425 "trtype": "TCP" 00:20:55.425 } 00:20:55.425 ] 00:20:55.425 }, 00:20:55.425 { 00:20:55.425 "name": "nvmf_tgt_poll_group_002", 00:20:55.425 "admin_qpairs": 0, 00:20:55.425 "io_qpairs": 1, 00:20:55.425 "current_admin_qpairs": 0, 00:20:55.425 "current_io_qpairs": 1, 00:20:55.425 "pending_bdev_io": 0, 00:20:55.425 "completed_nvme_io": 20122, 00:20:55.425 "transports": [ 00:20:55.425 { 00:20:55.425 "trtype": "TCP" 00:20:55.425 } 00:20:55.425 ] 00:20:55.425 }, 00:20:55.425 { 00:20:55.425 "name": "nvmf_tgt_poll_group_003", 00:20:55.425 "admin_qpairs": 0, 00:20:55.425 "io_qpairs": 1, 00:20:55.425 "current_admin_qpairs": 0, 00:20:55.425 "current_io_qpairs": 1, 00:20:55.425 "pending_bdev_io": 0, 00:20:55.425 "completed_nvme_io": 19996, 00:20:55.425 "transports": [ 00:20:55.425 { 00:20:55.425 "trtype": "TCP" 00:20:55.425 } 00:20:55.425 ] 00:20:55.425 } 00:20:55.425 ] 00:20:55.425 }' 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:55.425 18:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:55.683 18:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:55.683 18:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:55.683 18:35:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3967931 00:21:03.788 Initializing NVMe Controllers 00:21:03.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:03.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:03.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:03.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:03.788 Initialization complete. Launching workers. 
00:21:03.788 ======================================================== 00:21:03.788 Latency(us) 00:21:03.788 Device Information : IOPS MiB/s Average min max 00:21:03.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10624.30 41.50 6025.12 2192.58 10884.98 00:21:03.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10881.90 42.51 5882.03 2032.71 12979.33 00:21:03.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10698.50 41.79 5982.62 1518.90 10624.82 00:21:03.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10711.00 41.84 5976.79 1643.95 9961.85 00:21:03.788 ======================================================== 00:21:03.788 Total : 42915.70 167.64 5966.18 1518.90 12979.33 00:21:03.788 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.788 rmmod nvme_tcp 00:21:03.788 rmmod nvme_fabrics 00:21:03.788 rmmod nvme_keyring 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3967860 ']' 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3967860 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3967860 ']' 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3967860 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3967860 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3967860' 00:21:03.788 killing process with pid 3967860 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3967860 00:21:03.788 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3967860 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.046 18:35:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.953 18:35:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.953 18:35:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:05.953 18:35:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:07.328 18:35:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:09.229 18:35:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.500 
18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.500 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
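The pci_net_devs glob in the line above is the entire PCI-to-netdev lookup: the ice driver registers each port's interface under the device's sysfs node, so no ethtool or udev query is needed. In isolation:

    #!/usr/bin/env bash
    pci=0000:86:00.0                                   # address from the trace
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"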
00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.501 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.501 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.501 
18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:21:14.501 00:21:14.501 --- 10.0.0.2 ping statistics --- 00:21:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.501 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:14.501 00:21:14.501 --- 10.0.0.1 ping statistics --- 00:21:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.501 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:14.501 net.core.busy_poll = 1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:14.501 net.core.busy_read = 1 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:14.501 18:35:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3971714 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3971714 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3971714 ']' 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.760 18:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.760 [2024-07-15 18:36:00.294575] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:21:14.760 [2024-07-15 18:36:00.294617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.019 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.019 [2024-07-15 18:36:00.368565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.019 [2024-07-15 18:36:00.453085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.019 [2024-07-15 18:36:00.453121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.019 [2024-07-15 18:36:00.453128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.019 [2024-07-15 18:36:00.453138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.019 [2024-07-15 18:36:00.453143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
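adq_configure_driver, traced just above, is where ADQ is actually wired up: hardware TC offload goes on, busy polling is enabled so socket reads spin in place instead of sleeping, an mqprio root qdisc carves the port into two traffic classes, and a flower filter pins inbound NVMe/TCP (dst port 4420) to the second class in hardware only. The same sequence, collected into one sketch (run each command under ip netns exec when the port lives in a namespace, as this log does):

    #!/usr/bin/env bash
    IF=cvl_0_0
    ethtool --offload "$IF" hw-tc-offload on
    ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # TC0 = default traffic (2 queues at offset 0); TC1 = ADQ queues (2 at offset 2)
    tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IF" ingress
    # Steer NVMe/TCP flows to TC1; skip_sw keeps the match out of the software path
    tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With --enable-placement-id 1 on the posix sock layer, the second perf pass is then expected to collapse its qpairs onto the poll groups that own the ADQ queues, which is what the later nvmf_get_stats check asserts: at least two of the four poll groups must report current_io_qpairs of 0.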
00:21:15.019 [2024-07-15 18:36:00.453191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.019 [2024-07-15 18:36:00.453223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.019 [2024-07-15 18:36:00.453366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.019 [2024-07-15 18:36:00.453367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.585 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.844 [2024-07-15 18:36:01.272916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.844 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.845 Malloc1 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.845 18:36:01 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.845 [2024-07-15 18:36:01.320358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3972019 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:15.845 18:36:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:15.845 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:18.378 "tick_rate": 2100000000, 00:21:18.378 "poll_groups": [ 00:21:18.378 { 00:21:18.378 "name": "nvmf_tgt_poll_group_000", 00:21:18.378 "admin_qpairs": 1, 00:21:18.378 "io_qpairs": 1, 00:21:18.378 "current_admin_qpairs": 1, 00:21:18.378 "current_io_qpairs": 1, 00:21:18.378 "pending_bdev_io": 0, 00:21:18.378 "completed_nvme_io": 30388, 00:21:18.378 "transports": [ 00:21:18.378 { 00:21:18.378 "trtype": "TCP" 00:21:18.378 } 00:21:18.378 ] 00:21:18.378 }, 00:21:18.378 { 00:21:18.378 "name": "nvmf_tgt_poll_group_001", 00:21:18.378 "admin_qpairs": 0, 00:21:18.378 "io_qpairs": 3, 00:21:18.378 "current_admin_qpairs": 0, 00:21:18.378 "current_io_qpairs": 3, 00:21:18.378 "pending_bdev_io": 0, 00:21:18.378 "completed_nvme_io": 30660, 00:21:18.378 "transports": [ 00:21:18.378 { 00:21:18.378 "trtype": "TCP" 00:21:18.378 } 00:21:18.378 ] 00:21:18.378 }, 00:21:18.378 { 00:21:18.378 "name": "nvmf_tgt_poll_group_002", 00:21:18.378 "admin_qpairs": 0, 00:21:18.378 "io_qpairs": 0, 00:21:18.378 "current_admin_qpairs": 0, 00:21:18.378 "current_io_qpairs": 0, 00:21:18.378 "pending_bdev_io": 0, 00:21:18.378 "completed_nvme_io": 0, 
00:21:18.378 "transports": [ 00:21:18.378 { 00:21:18.378 "trtype": "TCP" 00:21:18.378 } 00:21:18.378 ] 00:21:18.378 }, 00:21:18.378 { 00:21:18.378 "name": "nvmf_tgt_poll_group_003", 00:21:18.378 "admin_qpairs": 0, 00:21:18.378 "io_qpairs": 0, 00:21:18.378 "current_admin_qpairs": 0, 00:21:18.378 "current_io_qpairs": 0, 00:21:18.378 "pending_bdev_io": 0, 00:21:18.378 "completed_nvme_io": 0, 00:21:18.378 "transports": [ 00:21:18.378 { 00:21:18.378 "trtype": "TCP" 00:21:18.378 } 00:21:18.378 ] 00:21:18.378 } 00:21:18.378 ] 00:21:18.378 }' 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:18.378 18:36:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3972019 00:21:26.490 Initializing NVMe Controllers 00:21:26.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:26.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:26.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:26.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:26.490 Initialization complete. Launching workers. 00:21:26.490 ======================================================== 00:21:26.490 Latency(us) 00:21:26.490 Device Information : IOPS MiB/s Average min max 00:21:26.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5509.70 21.52 11619.85 1455.80 56444.71 00:21:26.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5450.20 21.29 11787.81 1661.03 58072.50 00:21:26.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5155.40 20.14 12414.46 1390.71 58364.82 00:21:26.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15959.90 62.34 4009.92 891.92 44737.42 00:21:26.490 ======================================================== 00:21:26.490 Total : 32075.20 125.29 7989.58 891.92 58364.82 00:21:26.490 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:26.490 rmmod nvme_tcp 00:21:26.490 rmmod nvme_fabrics 00:21:26.490 rmmod nvme_keyring 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3971714 ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3971714 ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3971714' 00:21:26.490 killing process with pid 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3971714 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.490 18:36:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.439 18:36:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.439 18:36:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:28.439 00:21:28.439 real 0m51.211s 00:21:28.439 user 2m49.352s 00:21:28.439 sys 0m10.568s 00:21:28.439 18:36:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.439 18:36:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.439 ************************************ 00:21:28.439 END TEST nvmf_perf_adq 00:21:28.439 ************************************ 00:21:28.439 18:36:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:28.439 18:36:13 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:28.439 18:36:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.439 18:36:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.439 18:36:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.439 ************************************ 00:21:28.439 START TEST nvmf_shutdown 00:21:28.439 ************************************ 00:21:28.439 18:36:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:28.698 * Looking for test storage... 
00:21:28.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:28.698 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:28.699 ************************************ 00:21:28.699 START TEST nvmf_shutdown_tc1 00:21:28.699 ************************************ 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:28.699 18:36:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.699 18:36:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:35.266 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:35.266 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.266 18:36:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:35.266 Found net devices under 0000:86:00.0: cvl_0_0 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.266 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:35.267 Found net devices under 0000:86:00.1: cvl_0_1 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:35.267 00:21:35.267 --- 10.0.0.2 ping statistics --- 00:21:35.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.267 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:35.267 00:21:35.267 --- 10.0.0.1 ping statistics --- 00:21:35.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.267 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3977661 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3977661 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3977661 ']' 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.267 18:36:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.267 [2024-07-15 18:36:20.018461] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:35.267 [2024-07-15 18:36:20.018514] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.267 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.267 [2024-07-15 18:36:20.091673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.267 [2024-07-15 18:36:20.168532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.267 [2024-07-15 18:36:20.168571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.267 [2024-07-15 18:36:20.168578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.267 [2024-07-15 18:36:20.168584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.267 [2024-07-15 18:36:20.168588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.267 [2024-07-15 18:36:20.168706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.267 [2024-07-15 18:36:20.168813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.267 [2024-07-15 18:36:20.168895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.267 [2024-07-15 18:36:20.168896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.527 [2024-07-15 18:36:20.871326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.527 18:36:20 
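Condensed, the target bring-up traced in this block is: launch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (reactors on cores 1-4, as the notices above show), wait for the RPC socket, then create the TCP transport. A minimal sketch from the repo root, with a socket poll standing in for the waitforlisten helper from autotest_common.sh:

    # Start the target in the namespace prepared above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Stand-in for waitforlisten: block until the RPC socket exists
    # (Unix sockets live on the filesystem, so this works across the netns)
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # First RPC the test issues, mirroring shutdown.sh@20 below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192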
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.527 18:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.527 Malloc1 00:21:35.527 [2024-07-15 18:36:20.966935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.527 Malloc2 00:21:35.527 Malloc3 00:21:35.527 Malloc4 00:21:35.786 Malloc5 00:21:35.786 Malloc6 00:21:35.786 Malloc7 00:21:35.786 Malloc8 00:21:35.786 Malloc9 00:21:35.786 Malloc10 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3977940 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3977940 
/var/tmp/bdevperf.sock 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3977940 ']' 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.046 { 00:21:36.046 "params": { 00:21:36.046 "name": "Nvme$subsystem", 00:21:36.046 "trtype": "$TEST_TRANSPORT", 00:21:36.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.046 "adrfam": "ipv4", 00:21:36.046 "trsvcid": "$NVMF_PORT", 00:21:36.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.046 "hdgst": ${hdgst:-false}, 00:21:36.046 "ddgst": ${ddgst:-false} 00:21:36.046 }, 00:21:36.046 "method": "bdev_nvme_attach_controller" 00:21:36.046 } 00:21:36.046 EOF 00:21:36.046 )") 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.046 { 00:21:36.046 "params": { 00:21:36.046 "name": "Nvme$subsystem", 00:21:36.046 "trtype": "$TEST_TRANSPORT", 00:21:36.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.046 "adrfam": "ipv4", 00:21:36.046 "trsvcid": "$NVMF_PORT", 00:21:36.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.046 "hdgst": ${hdgst:-false}, 00:21:36.046 "ddgst": ${ddgst:-false} 00:21:36.046 }, 00:21:36.046 "method": "bdev_nvme_attach_controller" 00:21:36.046 } 00:21:36.046 EOF 00:21:36.046 )") 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.046 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.046 { 00:21:36.046 "params": { 00:21:36.046 
"name": "Nvme$subsystem", 00:21:36.046 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 [2024-07-15 18:36:21.441286] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:36.047 [2024-07-15 18:36:21.441329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.047 { 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme$subsystem", 00:21:36.047 "trtype": "$TEST_TRANSPORT", 00:21:36.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "$NVMF_PORT", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.047 "hdgst": ${hdgst:-false}, 
00:21:36.047 "ddgst": ${ddgst:-false} 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 } 00:21:36.047 EOF 00:21:36.047 )") 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.047 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:36.047 18:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme1", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 },{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme2", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 },{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme3", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 },{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme4", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 },{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme5", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.047 }, 00:21:36.047 "method": "bdev_nvme_attach_controller" 00:21:36.047 },{ 00:21:36.047 "params": { 00:21:36.047 "name": "Nvme6", 00:21:36.047 "trtype": "tcp", 00:21:36.047 "traddr": "10.0.0.2", 00:21:36.047 "adrfam": "ipv4", 00:21:36.047 "trsvcid": "4420", 00:21:36.047 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.047 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.047 "hdgst": false, 00:21:36.047 "ddgst": false 00:21:36.048 }, 00:21:36.048 "method": "bdev_nvme_attach_controller" 00:21:36.048 },{ 00:21:36.048 "params": { 00:21:36.048 "name": "Nvme7", 00:21:36.048 "trtype": "tcp", 00:21:36.048 "traddr": "10.0.0.2", 00:21:36.048 "adrfam": "ipv4", 00:21:36.048 "trsvcid": "4420", 00:21:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.048 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.048 "hdgst": false, 00:21:36.048 "ddgst": false 
00:21:36.048 }, 00:21:36.048 "method": "bdev_nvme_attach_controller" 00:21:36.048 },{ 00:21:36.048 "params": { 00:21:36.048 "name": "Nvme8", 00:21:36.048 "trtype": "tcp", 00:21:36.048 "traddr": "10.0.0.2", 00:21:36.048 "adrfam": "ipv4", 00:21:36.048 "trsvcid": "4420", 00:21:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.048 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.048 "hdgst": false, 00:21:36.048 "ddgst": false 00:21:36.048 }, 00:21:36.048 "method": "bdev_nvme_attach_controller" 00:21:36.048 },{ 00:21:36.048 "params": { 00:21:36.048 "name": "Nvme9", 00:21:36.048 "trtype": "tcp", 00:21:36.048 "traddr": "10.0.0.2", 00:21:36.048 "adrfam": "ipv4", 00:21:36.048 "trsvcid": "4420", 00:21:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.048 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:36.048 "hdgst": false, 00:21:36.048 "ddgst": false 00:21:36.048 }, 00:21:36.048 "method": "bdev_nvme_attach_controller" 00:21:36.048 },{ 00:21:36.048 "params": { 00:21:36.048 "name": "Nvme10", 00:21:36.048 "trtype": "tcp", 00:21:36.048 "traddr": "10.0.0.2", 00:21:36.048 "adrfam": "ipv4", 00:21:36.048 "trsvcid": "4420", 00:21:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.048 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.048 "hdgst": false, 00:21:36.048 "ddgst": false 00:21:36.048 }, 00:21:36.048 "method": "bdev_nvme_attach_controller" 00:21:36.048 }' 00:21:36.048 [2024-07-15 18:36:21.511121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.048 [2024-07-15 18:36:21.582864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3977940 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:37.424 18:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:38.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3977940 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3977661 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
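Each of the ten controller stanzas printed above comes from one heredoc template, expanded per subsystem number and comma-joined. A trimmed sketch of that generator, following the pattern visible in the trace (the surrounding JSON wrapper that gen_nvmf_target_json adds is omitted, and the digest flags are hardcoded to false as in this run):

# Simplified stand-in for the stanza generation traced above
gen_attach_stanzas() {
    local config=() n
    for n in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"  # comma-joins the stanzas, as in the printf above
}

The test feeds the result to the app through process substitution, as the killed-job message above shows: --json <(gen_nvmf_target_json "${num_subsystems[@]}").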
00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.799 { 00:21:38.799 "params": { 00:21:38.799 "name": "Nvme$subsystem", 00:21:38.799 "trtype": "$TEST_TRANSPORT", 00:21:38.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.799 "adrfam": "ipv4", 00:21:38.799 "trsvcid": "$NVMF_PORT", 00:21:38.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.799 "hdgst": ${hdgst:-false}, 00:21:38.799 "ddgst": ${ddgst:-false} 00:21:38.799 }, 00:21:38.799 "method": "bdev_nvme_attach_controller" 00:21:38.799 } 00:21:38.799 EOF 00:21:38.799 )") 00:21:38.799 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.800 [2024-07-15 18:36:23.986749] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:38.800 [2024-07-15 18:36:23.986795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978425 ] 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.800 { 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme$subsystem", 00:21:38.800 "trtype": "$TEST_TRANSPORT", 00:21:38.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "$NVMF_PORT", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.800 "hdgst": ${hdgst:-false}, 00:21:38.800 "ddgst": ${ddgst:-false} 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 } 00:21:38.800 EOF 00:21:38.800 )") 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.800 { 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme$subsystem", 00:21:38.800 "trtype": "$TEST_TRANSPORT", 00:21:38.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "$NVMF_PORT", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.800 "hdgst": ${hdgst:-false}, 00:21:38.800 "ddgst": ${ddgst:-false} 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 } 00:21:38.800 EOF 00:21:38.800 )") 00:21:38.800 18:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:38.800 { 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme$subsystem", 00:21:38.800 "trtype": "$TEST_TRANSPORT", 00:21:38.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "$NVMF_PORT", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.800 "hdgst": ${hdgst:-false}, 00:21:38.800 "ddgst": ${ddgst:-false} 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 } 00:21:38.800 EOF 00:21:38.800 )") 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:38.800 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:38.800 18:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme1", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme2", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme3", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme4", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme5", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme6", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme7", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme8", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:38.800 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme9", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 },{ 00:21:38.800 "params": { 00:21:38.800 "name": "Nvme10", 00:21:38.800 "trtype": "tcp", 00:21:38.800 "traddr": "10.0.0.2", 00:21:38.800 "adrfam": "ipv4", 00:21:38.800 "trsvcid": "4420", 00:21:38.800 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:38.800 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:38.800 "hdgst": false, 00:21:38.800 "ddgst": false 00:21:38.800 }, 00:21:38.800 "method": "bdev_nvme_attach_controller" 00:21:38.800 }' 00:21:38.800 [2024-07-15 18:36:24.056584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.800 [2024-07-15 18:36:24.129322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.177 Running I/O for 1 seconds... 00:21:41.113 00:21:41.113 Latency(us) 00:21:41.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.113 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme1n1 : 1.12 289.70 18.11 0.00 0.00 218511.84 3167.57 212711.13 00:21:41.113 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme2n1 : 1.13 283.33 17.71 0.00 0.00 220870.75 16602.45 223696.21 00:21:41.113 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme3n1 : 1.08 296.70 18.54 0.00 0.00 206546.65 12857.54 206719.27 00:21:41.113 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme4n1 : 1.11 292.73 18.30 0.00 0.00 206788.26 6272.73 200727.41 00:21:41.113 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme5n1 : 1.14 281.42 17.59 0.00 0.00 212600.34 18350.08 214708.42 00:21:41.113 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme6n1 : 1.14 281.72 17.61 0.00 0.00 209817.40 16602.45 212711.13 00:21:41.113 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme7n1 : 1.11 292.52 18.28 0.00 0.00 198661.54 15978.30 212711.13 00:21:41.113 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme8n1 : 1.13 284.15 17.76 0.00 0.00 201820.26 14293.09 203723.34 00:21:41.113 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme9n1 : 1.14 280.99 17.56 0.00 0.00 201216.00 19723.22 221698.93 00:21:41.113 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.113 
Verification LBA range: start 0x0 length 0x400 00:21:41.113 Nvme10n1 : 1.14 279.96 17.50 0.00 0.00 199093.49 18350.08 234681.30 00:21:41.113 =================================================================================================================== 00:21:41.114 Total : 2863.23 178.95 0.00 0.00 207597.27 3167.57 234681.30 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.373 rmmod nvme_tcp 00:21:41.373 rmmod nvme_fabrics 00:21:41.373 rmmod nvme_keyring 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3977661 ']' 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3977661 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3977661 ']' 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3977661 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.373 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3977661 00:21:41.632 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.632 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.632 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3977661' 00:21:41.632 killing process with pid 3977661 00:21:41.632 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3977661 00:21:41.632 18:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3977661 00:21:41.890 
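The killprocess call that just reaped the tc1 target (pid 3977661) is a small defensive helper from autotest_common.sh: confirm a pid was given and is still alive, refuse to kill a sudo wrapper by mistake, then signal and reap. Condensed from the trace:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1        # no pid given
    kill -0 "$pid" || return 1       # process already gone
    if [ "$(uname)" = Linux ]; then
        # never shoot the sudo wrapper; in the run above ps reported reactor_1
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # works because the target is a child of this shell
}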
18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.890 18:36:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.422 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.422 00:21:44.422 real 0m15.240s 00:21:44.422 user 0m33.920s 00:21:44.422 sys 0m5.733s 00:21:44.422 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 ************************************ 00:21:44.423 END TEST nvmf_shutdown_tc1 00:21:44.423 ************************************ 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 ************************************ 00:21:44.423 START TEST nvmf_shutdown_tc2 00:21:44.423 ************************************ 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:44.423 18:36:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.423 18:36:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:44.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:44.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:44.423 Found net devices under 0000:86:00.0: cvl_0_0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:44.423 Found net devices under 0000:86:00.1: cvl_0_1 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:44.423 00:21:44.423 --- 10.0.0.2 ping statistics --- 00:21:44.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.423 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:44.423 00:21:44.423 --- 10.0.0.1 ping statistics --- 00:21:44.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.423 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3979448 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3979448 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3979448 ']' 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.423 18:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.423 [2024-07-15 18:36:29.825810] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:21:44.423 [2024-07-15 18:36:29.825851] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.423 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.423 [2024-07-15 18:36:29.893999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.423 [2024-07-15 18:36:29.964443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.423 [2024-07-15 18:36:29.964485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.423 [2024-07-15 18:36:29.964492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.423 [2024-07-15 18:36:29.964497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.423 [2024-07-15 18:36:29.964502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
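The four reactor_run notices that follow are the direct result of the -m 0x1E core mask passed to nvmf_tgt: 0x1E is binary 11110, that is, cores 1-4 with core 0 left free for the initiator side. A one-liner to decode any mask the same way SPDK does:

mask=0x1E
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# prints cores 1 2 3 4 -- matching the notices below, which appear in the
# order 2, 3, 1, 4 because the reactors come up concurrently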
00:21:44.423 [2024-07-15 18:36:29.964630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.423 [2024-07-15 18:36:29.964764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.423 [2024-07-15 18:36:29.964846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.423 [2024-07-15 18:36:29.964847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.356 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.356 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.357 [2024-07-15 18:36:30.665219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.357 18:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.357 Malloc1 00:21:45.357 [2024-07-15 18:36:30.760677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.357 Malloc2 00:21:45.357 Malloc3 00:21:45.357 Malloc4 00:21:45.357 Malloc5 00:21:45.615 Malloc6 00:21:45.615 Malloc7 00:21:45.615 Malloc8 00:21:45.615 Malloc9 00:21:45.615 Malloc10 00:21:45.615 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.615 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:45.615 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.615 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3979725 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3979725 /var/tmp/bdevperf.sock 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3979725 ']' 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
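At this point shutdown.sh@102 has launched bdevperf; the --json /dev/fd/63 seen in the trace is bash process substitution carrying the output of gen_nvmf_target_json 1..10. The invocation reduces to (paths relative to the spdk checkout):

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verify
# workload, -t 10: run for 10 seconds -- matching the job table further down.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10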
00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.874 { 00:21:45.874 "params": { 00:21:45.874 "name": "Nvme$subsystem", 00:21:45.874 "trtype": "$TEST_TRANSPORT", 00:21:45.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.874 "adrfam": "ipv4", 00:21:45.874 "trsvcid": "$NVMF_PORT", 00:21:45.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.874 "hdgst": ${hdgst:-false}, 00:21:45.874 "ddgst": ${ddgst:-false} 00:21:45.874 }, 00:21:45.874 "method": "bdev_nvme_attach_controller" 00:21:45.874 } 00:21:45.874 EOF 00:21:45.874 )") 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.874 { 00:21:45.874 "params": { 00:21:45.874 "name": "Nvme$subsystem", 00:21:45.874 "trtype": "$TEST_TRANSPORT", 00:21:45.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.874 "adrfam": "ipv4", 00:21:45.874 "trsvcid": "$NVMF_PORT", 00:21:45.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.874 "hdgst": ${hdgst:-false}, 00:21:45.874 "ddgst": ${ddgst:-false} 00:21:45.874 }, 00:21:45.874 "method": "bdev_nvme_attach_controller" 00:21:45.874 } 00:21:45.874 EOF 00:21:45.874 )") 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.874 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 
00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 [2024-07-15 18:36:31.226856] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:21:45.875 [2024-07-15 18:36:31.226907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979725 ] 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:45.875 { 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme$subsystem", 00:21:45.875 "trtype": "$TEST_TRANSPORT", 00:21:45.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "$NVMF_PORT", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.875 "hdgst": ${hdgst:-false}, 00:21:45.875 "ddgst": ${ddgst:-false} 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 } 00:21:45.875 EOF 00:21:45.875 )") 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
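Further down, waitforio gates the shutdown test on real traffic: it polls bdev_get_iostat for Nvme1n1 until at least 100 reads have completed, retrying up to ten times at 0.25 s intervals (in this run the counter goes 3, then 131). The loop, condensed, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

waitforio() {
    local rpc_sock=$1 bdev=$2 i count ret=1
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0    # enough I/O observed; the perf run is really live
            break
        fi
        sleep 0.25
    done
    return $ret
}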
00:21:45.875 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:45.875 18:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme1", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme2", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme3", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme4", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme5", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme6", 00:21:45.875 "trtype": "tcp", 00:21:45.875 "traddr": "10.0.0.2", 00:21:45.875 "adrfam": "ipv4", 00:21:45.875 "trsvcid": "4420", 00:21:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:45.875 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:45.875 "hdgst": false, 00:21:45.875 "ddgst": false 00:21:45.875 }, 00:21:45.875 "method": "bdev_nvme_attach_controller" 00:21:45.875 },{ 00:21:45.875 "params": { 00:21:45.875 "name": "Nvme7", 00:21:45.876 "trtype": "tcp", 00:21:45.876 "traddr": "10.0.0.2", 00:21:45.876 "adrfam": "ipv4", 00:21:45.876 "trsvcid": "4420", 00:21:45.876 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:45.876 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:45.876 "hdgst": false, 00:21:45.876 "ddgst": false 00:21:45.876 }, 00:21:45.876 "method": "bdev_nvme_attach_controller" 00:21:45.876 },{ 00:21:45.876 "params": { 00:21:45.876 "name": "Nvme8", 00:21:45.876 "trtype": "tcp", 00:21:45.876 "traddr": "10.0.0.2", 00:21:45.876 "adrfam": "ipv4", 00:21:45.876 "trsvcid": "4420", 00:21:45.876 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:45.876 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:45.876 "hdgst": false, 00:21:45.876 "ddgst": false 00:21:45.876 }, 00:21:45.876 "method": "bdev_nvme_attach_controller" 00:21:45.876 },{ 00:21:45.876 "params": { 00:21:45.876 "name": "Nvme9", 00:21:45.876 "trtype": "tcp", 00:21:45.876 "traddr": "10.0.0.2", 00:21:45.876 "adrfam": "ipv4", 00:21:45.876 "trsvcid": "4420", 00:21:45.876 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:45.876 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:45.876 "hdgst": false, 00:21:45.876 "ddgst": false 00:21:45.876 }, 00:21:45.876 "method": "bdev_nvme_attach_controller" 00:21:45.876 },{ 00:21:45.876 "params": { 00:21:45.876 "name": "Nvme10", 00:21:45.876 "trtype": "tcp", 00:21:45.876 "traddr": "10.0.0.2", 00:21:45.876 "adrfam": "ipv4", 00:21:45.876 "trsvcid": "4420", 00:21:45.876 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:45.876 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:45.876 "hdgst": false, 00:21:45.876 "ddgst": false 00:21:45.876 }, 00:21:45.876 "method": "bdev_nvme_attach_controller" 00:21:45.876 }' 00:21:45.876 [2024-07-15 18:36:31.297342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.876 [2024-07-15 18:36:31.368999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.248 Running I/O for 10 seconds... 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.248 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.506 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:47.506 18:36:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:47.506 18:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3979725 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3979725 ']' 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3979725 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3979725 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3979725' 00:21:47.765 killing process with pid 3979725 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3979725 00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3979725 00:21:47.765 Received shutdown signal, test time was about 0.602769 seconds 00:21:47.765 00:21:47.765 Latency(us) 00:21:47.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.765 Verification LBA range: start 0x0 length 0x400 00:21:47.765 Nvme1n1 : 0.59 326.94 20.43 0.00 0.00 192586.77 16602.45 203723.34 00:21:47.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.765 Verification LBA range: start 0x0 length 0x400 00:21:47.765 Nvme2n1 : 0.57 224.53 14.03 0.00 0.00 273025.22 
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3979725
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3979725 ']'
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3979725
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3979725
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:47.765 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3979725'
killing process with pid 3979725
18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3979725
18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3979725
00:21:47.765 Received shutdown signal, test time was about 0.602769 seconds
00:21:47.765
00:21:47.765                                                                    Latency(us)
00:21:47.765 Device Information   : runtime(s)     IOPS      MiB/s    Fail/s    TO/s      Average        min          max
00:21:47.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme1n1              :       0.59    326.94     20.43      0.00    0.00    192586.77    16602.45    203723.34
00:21:47.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme2n1              :       0.57    224.53     14.03      0.00    0.00    273025.22    16976.94    196732.83
00:21:47.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme3n1              :       0.59    325.07     20.32      0.00    0.00    183502.91    14230.67    198730.12
00:21:47.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme4n1              :       0.59    323.19     20.20      0.00    0.00    179591.56    14355.50    192738.26
00:21:47.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme5n1              :       0.60    321.61     20.10      0.00    0.00    175546.11    18724.57    213709.78
00:21:47.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme6n1              :       0.60    320.04     20.00      0.00    0.00    170708.60    15978.30    206719.27
00:21:47.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme7n1              :       0.60    318.87     19.93      0.00    0.00    167037.64    13981.01    202724.69
00:21:47.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme8n1              :       0.57    241.76     15.11      0.00    0.00    207080.74     7365.00    189742.32
00:21:47.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme9n1              :       0.58    220.87     13.80      0.00    0.00    224029.74    31207.62    214708.42
00:21:47.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.765 Verification LBA range: start 0x0 length 0x400
00:21:47.765 Nvme10n1             :       0.58    220.25     13.77      0.00    0.00    217112.38    16727.28    227690.79
00:21:47.765 ===================================================================================================================
00:21:47.765 Total                :              2843.13    177.70      0.00    0.00    194292.88     7365.00    227690.79
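
A quick way to sanity-check this table: with 64 KiB verify IOs, the MiB/s column is just IOPS x 65536 / 2^20. Checking the Nvme1n1 row (the Total row works the same way, 2843.13 IOPS -> 177.70 MiB/s):

awk 'BEGIN { printf "%.2f MiB/s\n", 326.94 * 65536 / 1048576 }'   # prints 20.43, matching the row above
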
00:21:48.023 18:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3979448
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:48.958 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:48.958 rmmod nvme_tcp
00:21:48.958 rmmod nvme_fabrics
00:21:48.958 rmmod nvme_keyring
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3979448 ']'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3979448
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3979448 ']'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3979448
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3979448
00:21:49.216 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3979448'
killing process with pid 3979448
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3979448
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3979448
00:21:49.474 18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
18:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:52.026 18:36:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:52.026
00:21:52.026 real 0m7.523s
00:21:52.026 user 0m21.966s
00:21:52.026 sys 0m1.198s
00:21:52.026 18:36:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:52.026 18:36:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:52.026 ************************************
00:21:52.026 END TEST nvmf_shutdown_tc2
00:21:52.026 ************************************
00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:21:52.026 18:36:37
nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.026 ************************************ 00:21:52.026 START TEST nvmf_shutdown_tc3 00:21:52.026 ************************************ 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:52.026 18:36:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.026 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.027 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.027 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.027 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.027 
18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
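
Condensed, nvmf_tcp_init just built the test topology: the first e810 port becomes the NVMe/TCP target inside its own network namespace, the second stays in the root namespace as the initiator. A standalone sketch using exactly the names and addresses this rig derived (the ping checks that follow in the log verify the result):

# Target port moves into a namespace; initiator port stays in the root namespace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP

The namespace split is what lets a single host drive real NIC-to-NIC TCP traffic against itself instead of short-circuiting through loopback.
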
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:52.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:52.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms
00:21:52.027
00:21:52.027 --- 10.0.0.2 ping statistics ---
00:21:52.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:52.027 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:52.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:52.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:21:52.027
00:21:52.027 --- 10.0.0.1 ping statistics ---
00:21:52.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:52.027 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3980763
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3980763
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3980763 ']'
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:52.027 18:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
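
nvmfappstart backgrounds nvmf_tgt inside the namespace, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. The real helper has more bookkeeping (and the harness's accumulated triple ip-netns-exec prefix collapses to a single one here); this sketch only shows the observable contract, with rpc_get_methods used as a cheap probe RPC:

ip netns exec cvl_0_0_ns_spdk \
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E: reactors on cores 1-4
nvmfpid=$!
# Poll until the RPC socket answers (max_retries=100, as in the trace above)
for (( retry = 100; retry != 0; retry-- )); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
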
00:21:52.027 [2024-07-15 18:36:37.435438] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:21:52.027 [2024-07-15 18:36:37.435486] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:52.027 EAL: No free 2048 kB hugepages reported on node 1
00:21:52.027 [2024-07-15 18:36:37.505822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:52.286 [2024-07-15 18:36:37.582860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:52.286 [2024-07-15 18:36:37.582897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:52.286 [2024-07-15 18:36:37.582904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:52.286 [2024-07-15 18:36:37.582910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:52.286 [2024-07-15 18:36:37.582915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:52.286 [2024-07-15 18:36:37.583028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:52.286 [2024-07-15 18:36:37.583140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:52.286 [2024-07-15 18:36:37.583269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:52.286 [2024-07-15 18:36:37.583270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.853 [2024-07-15 18:36:38.267226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
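
With the transport created, the for/cat loop that follows appends one block per subsystem to rpcs.txt and replays the whole batch through rpc_cmd. The file contents are never echoed into this log, so the RPC lines below are only illustrative of the shape such a batch takes (one malloc bdev, one subsystem, one namespace, one TCP listener per cnode; sizes and serial numbers are assumptions):

for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# Replay the batch against the target's RPC socket, one command per line
while read -r cmd; do
    scripts/rpc.py -s /var/tmp/spdk.sock $cmd
done < rpcs.txt

The Malloc1..Malloc10 lines and the single "Target Listening on 10.0.0.2 port 4420" notice below are the observable output of exactly such a batch.
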
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:21:52.853 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:21:52.854 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:21:52.854 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.854 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.854 Malloc1
00:21:52.854 [2024-07-15 18:36:38.358524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:52.854 Malloc2
00:21:53.112 Malloc3
00:21:53.112 Malloc4
00:21:53.112 Malloc5
00:21:53.112 Malloc6
00:21:53.112 Malloc7
00:21:53.112 Malloc8
00:21:53.372 Malloc9
00:21:53.372 Malloc10
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3981042
00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3981042
/var/tmp/bdevperf.sock 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3981042 ']' 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.372 { 00:21:53.372 "params": { 00:21:53.372 "name": "Nvme$subsystem", 00:21:53.372 "trtype": "$TEST_TRANSPORT", 00:21:53.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.372 "adrfam": "ipv4", 00:21:53.372 "trsvcid": "$NVMF_PORT", 00:21:53.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.372 "hdgst": ${hdgst:-false}, 00:21:53.372 "ddgst": ${ddgst:-false} 00:21:53.372 }, 00:21:53.372 "method": "bdev_nvme_attach_controller" 00:21:53.372 } 00:21:53.372 EOF 00:21:53.372 )") 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.372 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.372 { 00:21:53.372 "params": { 00:21:53.372 "name": "Nvme$subsystem", 00:21:53.372 "trtype": "$TEST_TRANSPORT", 00:21:53.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.372 "adrfam": "ipv4", 00:21:53.372 "trsvcid": "$NVMF_PORT", 00:21:53.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.372 "hdgst": ${hdgst:-false}, 00:21:53.372 "ddgst": ${ddgst:-false} 00:21:53.372 }, 00:21:53.372 "method": "bdev_nvme_attach_controller" 00:21:53.372 } 00:21:53.372 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 
00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": 
"Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 [2024-07-15 18:36:38.835976] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:21:53.373 [2024-07-15 18:36:38.836021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981042 ] 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 )") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:53.373 { 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme$subsystem", 00:21:53.373 "trtype": "$TEST_TRANSPORT", 00:21:53.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "$NVMF_PORT", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.373 "hdgst": ${hdgst:-false}, 00:21:53.373 "ddgst": ${ddgst:-false} 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 } 00:21:53.373 EOF 00:21:53.373 
)") 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:53.373 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:53.373 18:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme1", 00:21:53.373 "trtype": "tcp", 00:21:53.373 "traddr": "10.0.0.2", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "4420", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.373 "hdgst": false, 00:21:53.373 "ddgst": false 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 },{ 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme2", 00:21:53.373 "trtype": "tcp", 00:21:53.373 "traddr": "10.0.0.2", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "4420", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:53.373 "hdgst": false, 00:21:53.373 "ddgst": false 00:21:53.373 }, 00:21:53.373 "method": "bdev_nvme_attach_controller" 00:21:53.373 },{ 00:21:53.373 "params": { 00:21:53.373 "name": "Nvme3", 00:21:53.373 "trtype": "tcp", 00:21:53.373 "traddr": "10.0.0.2", 00:21:53.373 "adrfam": "ipv4", 00:21:53.373 "trsvcid": "4420", 00:21:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:53.373 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:53.373 "hdgst": false, 00:21:53.373 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme4", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme5", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme6", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme7", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme8", 00:21:53.374 
"trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme9", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 },{ 00:21:53.374 "params": { 00:21:53.374 "name": "Nvme10", 00:21:53.374 "trtype": "tcp", 00:21:53.374 "traddr": "10.0.0.2", 00:21:53.374 "adrfam": "ipv4", 00:21:53.374 "trsvcid": "4420", 00:21:53.374 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:53.374 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:53.374 "hdgst": false, 00:21:53.374 "ddgst": false 00:21:53.374 }, 00:21:53.374 "method": "bdev_nvme_attach_controller" 00:21:53.374 }' 00:21:53.374 [2024-07-15 18:36:38.901243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.633 [2024-07-15 18:36:38.974361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.536 Running I/O for 10 seconds... 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:55.536 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq 
-r '.bdevs[0].num_read_ops' 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:55.537 18:36:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.537 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.796 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.796 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=87 00:21:55.796 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:21:55.796 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3980763 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3980763 ']' 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3980763 00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:56.069 18:36:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3980763
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3980763'
00:21:56.069 killing process with pid 3980763
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3980763
00:21:56.069 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3980763
00:21:56.069 [2024-07-15 18:36:41.478136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea4430 is same with the state(5) to be set
00:21:56.070 [... same *ERROR* repeated for tqpair=0xea4430 through 18:36:41.478574 ...]
00:21:56.070 [2024-07-15 18:36:41.480555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea48d0 is same with the state(5) to be set
00:21:56.071 [... same *ERROR* repeated for tqpair=0xea48d0 through 18:36:41.480972 ...]
00:21:56.071 [2024-07-15 18:36:41.482070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea4d70 is same with the state(5) to be set
00:21:56.071 [... same *ERROR* repeated for tqpair=0xea4d70 through 18:36:41.482448 ...]
00:21:56.071 [2024-07-15 18:36:41.483323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea5230 is same with the state(5) to be set
00:21:56.072 [... same *ERROR* repeated for tqpair=0xea5230 through 18:36:41.483548 ...]
00:21:56.072 [2024-07-15 18:36:41.484084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea56f0 is same with the state(5) to be set
00:21:56.073 [... same *ERROR* repeated for tqpair=0xea56f0 through 18:36:41.484471 ...]
00:21:56.073 [2024-07-15 18:36:41.485958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6030 is same with the state(5) to be set
00:21:56.073 [... same *ERROR* repeated for tqpair=0xea6030 through 18:36:41.486316 ...]
00:21:56.073 [2024-07-15 18:36:41.487259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea64d0 is same with the state(5) to be set
00:21:56.074 [... same *ERROR* repeated for tqpair=0xea64d0 through 18:36:41.487697, interleaved with the NVMe I/O abort messages below ...]
00:21:56.074 [2024-07-15 18:36:41.487406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.074 [2024-07-15 18:36:41.487694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.074 [2024-07-15 18:36:41.487703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.075 [2024-07-15 18:36:41.487710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.075 [2024-07-15 18:36:41.487718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.075 [2024-07-15 18:36:41.487728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.075 [2024-07-15 18:36:41.487736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.075 [2024-07-15 18:36:41.487742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.075 [2024-07-15 18:36:41.487750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.075 [2024-07-15 18:36:41.487756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.075 [2024-07-15 18:36:41.487764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.075 [2024-07-15 18:36:41.487772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.075 [2024-07-15 18:36:41.487780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.487989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.487996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
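For context on the flood of "ABORTED - SQ DELETION (00/08)" notices above: spdk_nvme_print_completion() prints the NVMe status as status-code-type/status-code, so "(00/08)" is SCT 0x00 (generic) with SC 0x08 (Command Aborted due to SQ Deletion): every queued WRITE/READ is failed back when its submission queue is torn down during the reset, not because the I/O itself failed. A minimal C sketch of how a host completion callback would match that status; the callback name and requeue policy are hypothetical, the SPDK types and enums are real:

#include "spdk/nvme.h"

/* Completion callback sketch: the "(00/08)" pair in the log is sct/sc. */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Command aborted because its SQ was deleted (controller
		 * reset/disconnect), not a media error: the I/O can be
		 * requeued once the controller is reconnected. */
	}
}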
00:21:56.075 [2024-07-15 18:36:41.488216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.075 [2024-07-15 18:36:41.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.075 [2024-07-15 18:36:41.488275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.075 [2024-07-15 18:36:41.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.075 [2024-07-15 18:36:41.488294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.075 [2024-07-15 18:36:41.488301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.075 [2024-07-15 18:36:41.488302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076
[2024-07-15 18:36:41.488325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076
[2024-07-15 18:36:41.488410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.076 [2024-07-15 18:36:41.488427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.076 [2024-07-15 18:36:41.488467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076
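The "CQ transport error -6 (No such device or address)" line above is the return value of spdk_nvme_qpair_process_completions() being surfaced: once the TCP connection has dropped, the poll returns -ENXIO (-6) instead of a completion count. A sketch of the host-side poll loop under that assumption; the recovery policy shown (resetting the controller) is illustrative, not the exact bdev_nvme logic:

#include <errno.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair; max_completions == 0 means "no limit, process
 * everything available". */
static void
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* rc == -ENXIO (-6) in the log above: a transport-level
		 * failure, the qpair is unusable. Reset the controller;
		 * outstanding I/O then completes as ABORTED - SQ DELETION. */
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}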
[2024-07-15 18:36:41.488517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the 
state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea6970 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488859] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1315ef0 was disconnected and freed. reset controller. 00:21:56.076 [2024-07-15 18:36:41.488917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.488927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.488954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.488967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.488973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232bf0 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.488998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.489005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.489012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.489018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.489025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.489031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.489038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.076 [2024-07-15 18:36:41.489046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.076 [2024-07-15 18:36:41.489053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a08b0 is same with the state(5) to be set 00:21:56.076 [2024-07-15 18:36:41.489071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b70d0 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122fb30 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
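The "ASYNC EVENT REQUEST (0c)" commands shown aborted on each controller's admin queue are the Asynchronous Event Requests (admin opcode 0x0c) that an NVMe host keeps permanently outstanding; when the admin SQ is deleted during the reset they complete with the same (00/08) status and are re-armed afterwards. A minimal sketch of the host-side registration, with a hypothetical callback name:

#include "spdk/nvme.h"

/* Invoked whenever a permanently-outstanding AER (opcode 0x0c) completes. */
static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* E.g. ABORTED - SQ DELETION during a reset, as logged;
		 * the driver typically re-submits AERs after recovery. */
		return;
	}
	/* On success, cpl->cdw0 encodes the async event type and info. */
}

static void
arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}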
00:21:56.077 [2024-07-15 18:36:41.489245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a340 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0050 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ebc70 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b78d0 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:56.077 [2024-07-15 18:36:41.489582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12281d0 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.489603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.077 [2024-07-15 18:36:41.489651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.489656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120e190 is same with the state(5) to be set 00:21:56.077 [2024-07-15 18:36:41.490096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.077 [2024-07-15 18:36:41.490206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.077 [2024-07-15 18:36:41.490212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.078 [2024-07-15 18:36:41.490327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.078 [2024-07-15 18:36:41.490335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.078 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion NOTICE pairs omitted: WRITE sqid:1 cid:45-63 lba:30336-32640 and READ sqid:1 cid:0-29 lba:24576-28288, len:128 each, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:56.079 [2024-07-15 18:36:41.505437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.079 [2024-07-15 18:36:41.505502] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x132d910 was disconnected and freed. reset controller.
00:21:56.079 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion NOTICE pairs omitted: WRITE sqid:1 cid:0-63 lba:24576-32640, len:128 each, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:56.081 [2024-07-15 18:36:41.507251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.081 [2024-07-15 18:36:41.507307] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1314a60 was disconnected and freed. reset controller.
00:21:56.081 [2024-07-15 18:36:41.508532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1232bf0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a08b0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b70d0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122fb30 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3a340 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0050 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ebc70 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b78d0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12281d0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.508695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120e190 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.511759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:56.081 [2024-07-15 18:36:41.512544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:56.081 [2024-07-15 18:36:41.512574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:56.081 [2024-07-15 18:36:41.512871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.081 [2024-07-15 18:36:41.512889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a08b0 with addr=10.0.0.2, port=4420
00:21:56.081 [2024-07-15 18:36:41.512899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a08b0 is same with the state(5) to be set
00:21:56.081 [2024-07-15 18:36:41.513994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.081 [2024-07-15 18:36:41.514019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122fb30 with addr=10.0.0.2, port=4420
00:21:56.081 [2024-07-15 18:36:41.514030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122fb30 is same with the state(5) to be set
00:21:56.081 [2024-07-15 18:36:41.514194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.081 [2024-07-15 18:36:41.514207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b70d0 with addr=10.0.0.2, port=4420
00:21:56.081 [2024-07-15 18:36:41.514221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b70d0 is same with the state(5) to be set
00:21:56.081 [2024-07-15 18:36:41.514233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a08b0 (9): Bad file descriptor
00:21:56.081 [... 6x nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (18:36:41.514284-514595) ...]
00:21:56.081 [2024-07-15 18:36:41.514632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122fb30 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.514647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b70d0 (9): Bad file descriptor
00:21:56.081 [2024-07-15 18:36:41.514657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:56.081 [2024-07-15 18:36:41.514665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:56.081 [2024-07-15 18:36:41.514675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:56.081 [2024-07-15 18:36:41.514776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:56.081 [2024-07-15 18:36:41.514788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:56.081 [2024-07-15 18:36:41.514796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:56.081 [2024-07-15 18:36:41.514804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:56.081 [2024-07-15 18:36:41.514816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:56.081 [2024-07-15 18:36:41.514824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:56.081 [2024-07-15 18:36:41.514832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:56.081 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion NOTICE pairs omitted: READ sqid:1 cid:0-63 lba:24576-32640, len:128 each, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (18:36:41.514888-516138) ...]
00:21:56.083 [2024-07-15 18:36:41.516147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7040 is same with the state(5) to be set
00:21:56.083 [2024-07-15 18:36:41.516207] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11e7040 was disconnected and freed. reset controller.
00:21:56.083 [2024-07-15 18:36:41.516234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:56.083 [2024-07-15 18:36:41.516243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:56.083 [2024-07-15 18:36:41.517494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:56.083 [2024-07-15 18:36:41.517754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.083 [2024-07-15 18:36:41.517773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3a340 with addr=10.0.0.2, port=4420
00:21:56.083 [2024-07-15 18:36:41.517782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a340 is same with the state(5) to be set
00:21:56.083 [2024-07-15 18:36:41.518105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3a340 (9): Bad file descriptor
00:21:56.083 [2024-07-15 18:36:41.518154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:56.083 [2024-07-15 18:36:41.518164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:56.083 [2024-07-15 18:36:41.518173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:56.083 [2024-07-15 18:36:41.518222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:56.084 [2024-07-15 18:36:41.519923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132cea0 is same with the state(5) to be set
00:21:56.086 [2024-07-15 18:36:41.522484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b3490 is same with the state(5) to be set
00:21:56.087 [2024-07-15 18:36:41.524251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.087 [2024-07-15 18:36:41.524257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.087 [2024-07-15 18:36:41.524265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.087 [2024-07-15 18:36:41.524271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.087 [2024-07-15 18:36:41.524279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.087 [2024-07-15 18:36:41.524285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.087 [2024-07-15 18:36:41.524294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.087 [2024-07-15 18:36:41.524301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.087 [2024-07-15 18:36:41.524308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.524406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.524413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4920 is same with the state(5) to be set 00:21:56.088 [2024-07-15 18:36:41.525368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.088 [2024-07-15 18:36:41.525838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.088 [2024-07-15 18:36:41.525844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.525852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.525859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.525873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.525882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e5b70 is same with the state(5) to be set 00:21:56.089 [2024-07-15 18:36:41.526752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526802] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.526991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.526998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.089 [2024-07-15 18:36:41.527280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.089 [2024-07-15 18:36:41.527286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.090 [2024-07-15 18:36:41.527393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 
18:36:41.527538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.527684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.527691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132ede0 is same with the state(5) to be set 00:21:56.090 [2024-07-15 18:36:41.528667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.090 [2024-07-15 18:36:41.528865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.090 [2024-07-15 18:36:41.528872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.528988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.528995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.091 [2024-07-15 18:36:41.529381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.091 [2024-07-15 18:36:41.529387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.092 [2024-07-15 18:36:41.529402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 
18:36:41.529545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.092 [2024-07-15 18:36:41.529602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.092 [2024-07-15 18:36:41.529610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13302b0 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.531515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.092 [2024-07-15 18:36:41.531540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:56.092 [2024-07-15 18:36:41.531548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:56.092 [2024-07-15 18:36:41.531557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:56.092 [2024-07-15 18:36:41.531632] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:56.092 [2024-07-15 18:36:41.531649] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
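Every completion in the long run above carries the same status tuple, printed by spdk_nvme_print_completion as "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. That is the expected way for in-flight READs to finish when a controller reset tears down the submission queue underneath them, and it is immediately followed by the nvme_ctrlr_disconnect "resetting controller" notices below. A minimal shell sketch of that decode (the 00/08 mapping is from the spec; the script itself is illustrative, not harness code):

    # hedged sketch: decode the "(SCT/SC)" pair that spdk_nvme_print_completion logs
    sct=00; sc=08
    case "$sct/$sc" in
        00/00) echo "SUCCESS - command completed without error" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;
        *)     echo "other status: sct=0x$sct sc=0x$sc" ;;
    esac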
00:21:56.092 [2024-07-15 18:36:41.531899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:56.092 task offset: 25088 on job bdev=Nvme10n1 fails
00:21:56.092
00:21:56.092 Latency(us)
00:21:56.092 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average        min        max
00:21:56.092 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme1n1 ended in about 0.90 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme1n1            :       0.90   212.92    13.31    70.97   0.00   223228.83   19223.89   199728.76
00:21:56.092 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme2n1            :       0.90   212.32    13.27    70.77   0.00   220024.44   25839.91   208716.56
00:21:56.092 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme3n1            :       0.91   217.41    13.59    65.11   0.00   216382.54   14542.75   215707.06
00:21:56.092 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme4n1 ended in about 0.91 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme4n1            :       0.91   243.53    15.22    38.57   0.00   211261.93   14355.50   216705.71
00:21:56.092 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme5n1            :       0.90   213.79    13.36    71.26   0.00   206931.38   18474.91   207717.91
00:21:56.092 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme6n1 ended in about 0.89 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme6n1            :       0.89   215.46    13.47    71.82   0.00   201337.17   18350.08   209715.20
00:21:56.092 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme7n1            :       0.91   211.13    13.20    70.38   0.00   202122.97   16352.79   211712.49
00:21:56.092 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme8n1 ended in about 0.91 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme8n1            :       0.91   210.68    13.17    70.23   0.00   198732.68   13294.45   210713.84
00:21:56.092 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme9n1 ended in about 0.89 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme9n1            :       0.89   215.16    13.45    71.72   0.00   190139.98   18599.74   236678.58
00:21:56.092 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:56.092 Job: Nvme10n1 ended in about 0.89 seconds with error
00:21:56.092 Verification LBA range: start 0x0 length 0x400
00:21:56.092 Nvme10n1           :       0.89   215.94    13.50    71.98   0.00   185437.62   17476.27   214708.42
00:21:56.092 ===================================================================================================================
00:21:56.092 Total              :              2168.34   135.52   672.81   0.00   205559.95   13294.45   236678.58
00:21:56.092 [2024-07-15 18:36:41.556497] app.c:1053:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:21:56.092 [2024-07-15 18:36:41.556537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:56.092 [2024-07-15 18:36:41.556862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.556879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ebc70 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.556888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ebc70 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.557129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.557140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b78d0 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.557146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b78d0 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.557320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.557329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c0050 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.557340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0050 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.557489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.557498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12281d0 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.557505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12281d0 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.558852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:56.092 [2024-07-15 18:36:41.558867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:56.092 [2024-07-15 18:36:41.558875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:56.092 [2024-07-15 18:36:41.558883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:56.092 [2024-07-15 18:36:41.559160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.559173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120e190 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.559186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120e190 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.559312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.092 [2024-07-15 18:36:41.559321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1232bf0 with addr=10.0.0.2, port=4420 00:21:56.092 [2024-07-15 18:36:41.559328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232bf0 is same with the state(5) to be set 00:21:56.092 [2024-07-15 18:36:41.559344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x11ebc70 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.559355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b78d0 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.559363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0050 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.559371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12281d0 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.559401] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:56.093 [2024-07-15 18:36:41.559410] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:56.093 [2024-07-15 18:36:41.559422] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:56.093 [2024-07-15 18:36:41.559431] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:56.093 [2024-07-15 18:36:41.559739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.093 [2024-07-15 18:36:41.559751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a08b0 with addr=10.0.0.2, port=4420 00:21:56.093 [2024-07-15 18:36:41.559758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a08b0 is same with the state(5) to be set 00:21:56.093 [2024-07-15 18:36:41.559916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.093 [2024-07-15 18:36:41.559926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b70d0 with addr=10.0.0.2, port=4420 00:21:56.093 [2024-07-15 18:36:41.559932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b70d0 is same with the state(5) to be set 00:21:56.093 [2024-07-15 18:36:41.560167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.093 [2024-07-15 18:36:41.560176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122fb30 with addr=10.0.0.2, port=4420 00:21:56.093 [2024-07-15 18:36:41.560183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122fb30 is same with the state(5) to be set 00:21:56.093 [2024-07-15 18:36:41.560321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.093 [2024-07-15 18:36:41.560330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3a340 with addr=10.0.0.2, port=4420 00:21:56.093 [2024-07-15 18:36:41.560341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a340 is same with the state(5) to be set 00:21:56.093 [2024-07-15 18:36:41.560350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120e190 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1232bf0 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a08b0 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b70d0 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122fb30 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3a340 (9): Bad file descriptor 00:21:56.093 [2024-07-15 18:36:41.560564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:21:56.093 [2024-07-15 18:36:41.560583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:56.093 [2024-07-15 18:36:41.560695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:56.093 [2024-07-15 18:36:41.560701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:56.093 [2024-07-15 18:36:41.560726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.093 [2024-07-15 18:36:41.560743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
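Two quick checks on the failure cascade and the summary table above. First, the errno = 111 reported by every posix_sock_create connect() attempt is ECONNREFUSED on Linux: spdk_app_stop has already fired on the target side, so each reconnect to 10.0.0.2:4420 is refused and every controller deliberately ends in the failed state, which is exactly what this shutdown test is meant to provoke. Second, the MiB/s column in the table is consistent with the IOPS column: each job uses a 65536-byte I/O size, so MiB/s = IOPS * 65536 / 2^20 = IOPS / 16. A one-liner to verify the Nvme1n1 row:

    # sanity check of the Nvme1n1 row: 212.92 IOPS at 64 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s\n", 212.92 * 65536 / 1048576 }'    # -> 13.31 MiB/s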
00:21:56.353 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:56.353 18:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3981042 00:21:57.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3981042) - No such process 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.733 rmmod nvme_tcp 00:21:57.733 rmmod nvme_fabrics 00:21:57.733 rmmod nvme_keyring 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.733 18:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.699 00:21:59.699 real 0m7.984s 00:21:59.699 user 0m20.002s 00:21:59.699 sys 0m1.310s 00:21:59.699 
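The tc3 teardown traced above follows the harness's stoptarget/nvmftestfini pattern: remove the bdevperf job state and generated config files, unload the NVMe-oF kernel modules, and flush the test interface addresses. Condensed into one sketch (with $SPDK standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix; the retry loop around modprobe is simplified away):

    # condensed sketch of the teardown steps traced above
    rm -f ./local-job0-0-verify.state                      # stoptarget
    rm -rf "$SPDK/test/nvmf/target/bdevperf.conf" "$SPDK/test/nvmf/target/rpcs.txt"
    sync                                                   # nvmftestfini
    modprobe -v -r nvme-tcp                                # harness retries this up to 20x
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                               # remove_spdk_ns cleanup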
18:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.699 ************************************ 00:21:59.699 END TEST nvmf_shutdown_tc3 00:21:59.699 ************************************ 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:59.699 00:21:59.699 real 0m31.092s 00:21:59.699 user 1m16.023s 00:21:59.699 sys 0m8.475s 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.699 18:36:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.699 ************************************ 00:21:59.699 END TEST nvmf_shutdown 00:21:59.699 ************************************ 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.699 18:36:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.699 18:36:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.699 18:36:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:59.699 18:36:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.699 18:36:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.699 ************************************ 00:21:59.699 START TEST nvmf_multicontroller 00:21:59.699 ************************************ 00:21:59.699 18:36:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.958 * Looking for test storage... 
00:21:59.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.958 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:59.959 18:36:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.959 18:36:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.228 18:36:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.228 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.228 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.229 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.229 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.488 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.488 18:36:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.488 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.488 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.488 18:36:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:05.488 00:22:05.488 --- 10.0.0.2 ping statistics --- 00:22:05.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.488 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:22:05.488 00:22:05.488 --- 10.0.0.1 ping statistics --- 00:22:05.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.488 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.488 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3985306 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3985306 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3985306 ']' 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.747 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.747 [2024-07-15 18:36:51.114780] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:05.747 [2024-07-15 18:36:51.114827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.747 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.747 [2024-07-15 18:36:51.185607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:05.747 [2024-07-15 18:36:51.264297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.747 [2024-07-15 18:36:51.264333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.747 [2024-07-15 18:36:51.264346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.747 [2024-07-15 18:36:51.264352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.747 [2024-07-15 18:36:51.264373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
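nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten 3985306 until the app answers on its RPC socket. A minimal sketch of that wait loop, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket (the real helper in autotest_common.sh carries extra bookkeeping, e.g. the max_retries=100 visible in the trace):

    # hedged sketch of waitforlisten: poll the RPC socket until the target is up
    pid=3985306
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break                                          # target is listening
        fi
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done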
00:22:05.747 [2024-07-15 18:36:51.264430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.747 [2024-07-15 18:36:51.264557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.747 [2024-07-15 18:36:51.264558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 [2024-07-15 18:36:51.960519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 Malloc0 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 [2024-07-15 18:36:52.019893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 
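The rpc_cmd calls traced above, together with the second listener and the Malloc1/cnode2 setup that continue just below, build the dual-listener, dual-subsystem target this test drives. Gathered into one sequence, the equivalent RPC calls look roughly like this (the rpc() wrapper and explicit -s socket path are illustrative; the harness's rpc_cmd resolves the socket itself, and cnode2 repeats the same pattern with Malloc1):

    # hedged sketch of the multicontroller target setup traced above and below
    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421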
18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 [2024-07-15 18:36:52.027845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 Malloc1 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.684 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3985444 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3985444 /var/tmp/bdevperf.sock 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3985444 ']' 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.685 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.621 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.621 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:07.621 18:36:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:07.621 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.621 18:36:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.621 NVMe0n1 00:22:07.621 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.621 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.621 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:07.621 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.621 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.879 1 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.879 request: 00:22:07.879 { 00:22:07.879 "name": "NVMe0", 00:22:07.879 "trtype": "tcp", 00:22:07.879 "traddr": "10.0.0.2", 00:22:07.879 "adrfam": "ipv4", 00:22:07.879 "trsvcid": "4420", 00:22:07.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.879 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:07.879 "hostaddr": "10.0.0.2", 00:22:07.879 "hostsvcid": "60000", 00:22:07.879 "prchk_reftag": false, 00:22:07.879 "prchk_guard": false, 00:22:07.879 "hdgst": false, 00:22:07.879 "ddgst": false, 00:22:07.879 "method": "bdev_nvme_attach_controller", 00:22:07.879 "req_id": 1 00:22:07.879 } 00:22:07.879 Got JSON-RPC error response 00:22:07.879 response: 00:22:07.879 { 00:22:07.879 "code": -114, 00:22:07.879 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:07.879 } 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.879 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.879 request: 00:22:07.879 { 00:22:07.879 "name": "NVMe0", 00:22:07.879 "trtype": "tcp", 00:22:07.879 "traddr": "10.0.0.2", 00:22:07.879 "adrfam": "ipv4", 00:22:07.879 "trsvcid": "4420", 00:22:07.879 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.879 "hostaddr": "10.0.0.2", 00:22:07.879 "hostsvcid": "60000", 00:22:07.879 "prchk_reftag": false, 00:22:07.879 "prchk_guard": false, 00:22:07.879 
"hdgst": false, 00:22:07.879 "ddgst": false, 00:22:07.879 "method": "bdev_nvme_attach_controller", 00:22:07.879 "req_id": 1 00:22:07.879 } 00:22:07.879 Got JSON-RPC error response 00:22:07.879 response: 00:22:07.879 { 00:22:07.879 "code": -114, 00:22:07.879 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:07.879 } 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.880 request: 00:22:07.880 { 00:22:07.880 "name": "NVMe0", 00:22:07.880 "trtype": "tcp", 00:22:07.880 "traddr": "10.0.0.2", 00:22:07.880 "adrfam": "ipv4", 00:22:07.880 "trsvcid": "4420", 00:22:07.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.880 "hostaddr": "10.0.0.2", 00:22:07.880 "hostsvcid": "60000", 00:22:07.880 "prchk_reftag": false, 00:22:07.880 "prchk_guard": false, 00:22:07.880 "hdgst": false, 00:22:07.880 "ddgst": false, 00:22:07.880 "multipath": "disable", 00:22:07.880 "method": "bdev_nvme_attach_controller", 00:22:07.880 "req_id": 1 00:22:07.880 } 00:22:07.880 Got JSON-RPC error response 00:22:07.880 response: 00:22:07.880 { 00:22:07.880 "code": -114, 00:22:07.880 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:07.880 } 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.880 18:36:53 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.880 request: 00:22:07.880 { 00:22:07.880 "name": "NVMe0", 00:22:07.880 "trtype": "tcp", 00:22:07.880 "traddr": "10.0.0.2", 00:22:07.880 "adrfam": "ipv4", 00:22:07.880 "trsvcid": "4420", 00:22:07.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.880 "hostaddr": "10.0.0.2", 00:22:07.880 "hostsvcid": "60000", 00:22:07.880 "prchk_reftag": false, 00:22:07.880 "prchk_guard": false, 00:22:07.880 "hdgst": false, 00:22:07.880 "ddgst": false, 00:22:07.880 "multipath": "failover", 00:22:07.880 "method": "bdev_nvme_attach_controller", 00:22:07.880 "req_id": 1 00:22:07.880 } 00:22:07.880 Got JSON-RPC error response 00:22:07.880 response: 00:22:07.880 { 00:22:07.880 "code": -114, 00:22:07.880 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:07.880 } 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.880 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.137 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.138 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:08.396 18:36:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:09.334 0 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3985444 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3985444 ']' 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3985444 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.334 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3985444 00:22:09.593 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:09.593 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:09.593 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3985444' 00:22:09.593 killing process with pid 3985444 00:22:09.593 18:36:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3985444 00:22:09.593 18:36:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3985444 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:09.593 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:09.593 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:09.593 [2024-07-15 18:36:52.132188] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:09.593 [2024-07-15 18:36:52.132235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985444 ] 00:22:09.593 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.593 [2024-07-15 18:36:52.199624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.593 [2024-07-15 18:36:52.273449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.594 [2024-07-15 18:36:53.705145] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name f67284fd-6a6a-4e37-941e-2293af5179e2 already exists 00:22:09.594 [2024-07-15 18:36:53.705172] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:f67284fd-6a6a-4e37-941e-2293af5179e2 alias for bdev NVMe1n1 00:22:09.594 [2024-07-15 18:36:53.705180] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:09.594 Running I/O for 1 seconds... 
00:22:09.594 00:22:09.594 Latency(us) 00:22:09.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.594 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:09.594 NVMe0n1 : 1.01 23918.92 93.43 0.00 0.00 5334.34 4899.60 10298.51 00:22:09.594 =================================================================================================================== 00:22:09.594 Total : 23918.92 93.43 0.00 0.00 5334.34 4899.60 10298.51 00:22:09.594 Received shutdown signal, test time was about 1.000000 seconds 00:22:09.594 00:22:09.594 Latency(us) 00:22:09.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.594 =================================================================================================================== 00:22:09.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.594 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.594 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.594 rmmod nvme_tcp 00:22:09.594 rmmod nvme_fabrics 00:22:09.853 rmmod nvme_keyring 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3985306 ']' 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3985306 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3985306 ']' 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3985306 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3985306 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3985306' 00:22:09.853 killing process with pid 3985306 00:22:09.853 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3985306 00:22:09.853 18:36:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3985306 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.112 18:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.015 18:36:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.015 00:22:12.015 real 0m12.330s 00:22:12.015 user 0m17.364s 00:22:12.015 sys 0m5.138s 00:22:12.015 18:36:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:12.015 18:36:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.015 ************************************ 00:22:12.015 END TEST nvmf_multicontroller 00:22:12.015 ************************************ 00:22:12.015 18:36:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:12.015 18:36:57 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:12.015 18:36:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:12.015 18:36:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.015 18:36:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.279 ************************************ 00:22:12.279 START TEST nvmf_aer 00:22:12.279 ************************************ 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:12.279 * Looking for test storage... 
00:22:12.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.279 18:36:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.849 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:22:18.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.850 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.850 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.850 
18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:22:18.850 00:22:18.850 --- 10.0.0.2 ping statistics --- 00:22:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.850 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:18.850 00:22:18.850 --- 10.0.0.1 ping statistics --- 00:22:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.850 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3989368 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3989368 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3989368 ']' 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.850 18:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.850 [2024-07-15 18:37:03.496189] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:18.850 [2024-07-15 18:37:03.496233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.850 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.850 [2024-07-15 18:37:03.564753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.850 [2024-07-15 18:37:03.643662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.850 [2024-07-15 18:37:03.643697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:18.850 [2024-07-15 18:37:03.643704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.850 [2024-07-15 18:37:03.643710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.850 [2024-07-15 18:37:03.643715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.850 [2024-07-15 18:37:03.643768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.850 [2024-07-15 18:37:03.643876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.850 [2024-07-15 18:37:03.643981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.850 [2024-07-15 18:37:03.643983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.850 [2024-07-15 18:37:04.344357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.850 Malloc0 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:18.850 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.851 [2024-07-15 18:37:04.395940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.851 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.110 [ 00:22:19.110 { 00:22:19.110 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:19.110 "subtype": "Discovery", 00:22:19.110 "listen_addresses": [], 00:22:19.110 "allow_any_host": true, 00:22:19.110 "hosts": [] 00:22:19.110 }, 00:22:19.110 { 00:22:19.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.110 "subtype": "NVMe", 00:22:19.110 "listen_addresses": [ 00:22:19.110 { 00:22:19.110 "trtype": "TCP", 00:22:19.110 "adrfam": "IPv4", 00:22:19.110 "traddr": "10.0.0.2", 00:22:19.110 "trsvcid": "4420" 00:22:19.110 } 00:22:19.110 ], 00:22:19.110 "allow_any_host": true, 00:22:19.110 "hosts": [], 00:22:19.110 "serial_number": "SPDK00000000000001", 00:22:19.110 "model_number": "SPDK bdev Controller", 00:22:19.110 "max_namespaces": 2, 00:22:19.110 "min_cntlid": 1, 00:22:19.110 "max_cntlid": 65519, 00:22:19.110 "namespaces": [ 00:22:19.110 { 00:22:19.110 "nsid": 1, 00:22:19.110 "bdev_name": "Malloc0", 00:22:19.110 "name": "Malloc0", 00:22:19.110 "nguid": "B41FF1BEC3484AEABB81C0BAA78E56BA", 00:22:19.110 "uuid": "b41ff1be-c348-4aea-bb81-c0baa78e56ba" 00:22:19.110 } 00:22:19.110 ] 00:22:19.110 } 00:22:19.110 ] 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3989590 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:19.110 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.110 Malloc1 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.110 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.369 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.369 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:19.369 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.369 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.369 Asynchronous Event Request test 00:22:19.369 Attaching to 10.0.0.2 00:22:19.369 Attached to 10.0.0.2 00:22:19.369 Registering asynchronous event callbacks... 00:22:19.369 Starting namespace attribute notice tests for all controllers... 00:22:19.369 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:19.369 aer_cb - Changed Namespace 00:22:19.369 Cleaning up... 00:22:19.369 [ 00:22:19.369 { 00:22:19.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:19.369 "subtype": "Discovery", 00:22:19.369 "listen_addresses": [], 00:22:19.369 "allow_any_host": true, 00:22:19.369 "hosts": [] 00:22:19.369 }, 00:22:19.369 { 00:22:19.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.369 "subtype": "NVMe", 00:22:19.369 "listen_addresses": [ 00:22:19.369 { 00:22:19.369 "trtype": "TCP", 00:22:19.369 "adrfam": "IPv4", 00:22:19.369 "traddr": "10.0.0.2", 00:22:19.369 "trsvcid": "4420" 00:22:19.369 } 00:22:19.369 ], 00:22:19.369 "allow_any_host": true, 00:22:19.369 "hosts": [], 00:22:19.370 "serial_number": "SPDK00000000000001", 00:22:19.370 "model_number": "SPDK bdev Controller", 00:22:19.370 "max_namespaces": 2, 00:22:19.370 "min_cntlid": 1, 00:22:19.370 "max_cntlid": 65519, 00:22:19.370 "namespaces": [ 00:22:19.370 { 00:22:19.370 "nsid": 1, 00:22:19.370 "bdev_name": "Malloc0", 00:22:19.370 "name": "Malloc0", 00:22:19.370 "nguid": "B41FF1BEC3484AEABB81C0BAA78E56BA", 00:22:19.370 "uuid": "b41ff1be-c348-4aea-bb81-c0baa78e56ba" 00:22:19.370 }, 00:22:19.370 { 00:22:19.370 "nsid": 2, 00:22:19.370 "bdev_name": "Malloc1", 00:22:19.370 "name": "Malloc1", 00:22:19.370 "nguid": "CEE58492C6664663BE53471EA9339C35", 00:22:19.370 "uuid": "cee58492-c666-4663-be53-471ea9339c35" 00:22:19.370 } 00:22:19.370 ] 00:22:19.370 } 00:22:19.370 ] 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3989590 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.370 rmmod nvme_tcp 00:22:19.370 rmmod nvme_fabrics 00:22:19.370 rmmod nvme_keyring 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3989368 ']' 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3989368 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3989368 ']' 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3989368 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3989368 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3989368' 00:22:19.370 killing process with pid 3989368 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3989368 00:22:19.370 18:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3989368 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:19.629 18:37:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.171 18:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.171 00:22:22.171 real 0m9.511s 00:22:22.171 user 0m7.200s 00:22:22.171 sys 0m4.758s 00:22:22.171 18:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.171 18:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.171 ************************************ 00:22:22.171 END TEST nvmf_aer 00:22:22.171 ************************************ 00:22:22.171 18:37:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:22.171 18:37:07 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:22.171 18:37:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.171 18:37:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.171 18:37:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.171 ************************************ 00:22:22.171 START TEST nvmf_async_init 00:22:22.171 ************************************ 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:22.172 * Looking for test storage... 00:22:22.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5a4648af7caf49178f4688919bcc2a25 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.172 18:37:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.433 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.433 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.433 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.434 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.434 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.434 
18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:22:27.434 00:22:27.434 --- 10.0.0.2 ping statistics --- 00:22:27.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.434 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:27.434 00:22:27.434 --- 10.0.0.1 ping statistics --- 00:22:27.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.434 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.434 18:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3993109 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3993109 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3993109 ']' 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.692 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:27.692 [2024-07-15 18:37:13.070545] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:22:27.692 [2024-07-15 18:37:13.070593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.692 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.692 [2024-07-15 18:37:13.141824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.692 [2024-07-15 18:37:13.211879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.692 [2024-07-15 18:37:13.211918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.692 [2024-07-15 18:37:13.211925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.692 [2024-07-15 18:37:13.211930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.692 [2024-07-15 18:37:13.211935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
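Note on the bring-up above: the target is launched inside the network namespace built earlier, and the harness blocks until the RPC Unix socket exists before issuing any RPC calls. A minimal standalone sketch of that sequence, assuming SPDK's default RPC socket path /var/tmp/spdk.sock and the cvl_0_0_ns_spdk namespace configured above (nvmfappstart and waitforlisten are test-framework wrappers; the retry loop below only approximates what waitforlisten does):

  # Start nvmf_tgt on core 0 inside the namespace so it can bind 10.0.0.2.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Poll for the RPC Unix socket before talking to the target.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done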
00:22:27.692 [2024-07-15 18:37:13.211972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.631 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.631 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:28.631 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.631 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 [2024-07-15 18:37:13.918611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 null0 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5a4648af7caf49178f4688919bcc2a25 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.632 [2024-07-15 18:37:13.962848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.632 18:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.890 nvme0n1 00:22:28.890 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.890 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:28.890 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.890 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.890 [ 00:22:28.890 { 00:22:28.890 "name": "nvme0n1", 00:22:28.890 "aliases": [ 00:22:28.890 "5a4648af-7caf-4917-8f46-88919bcc2a25" 00:22:28.890 ], 00:22:28.890 "product_name": "NVMe disk", 00:22:28.890 "block_size": 512, 00:22:28.890 "num_blocks": 2097152, 00:22:28.890 "uuid": "5a4648af-7caf-4917-8f46-88919bcc2a25", 00:22:28.890 "assigned_rate_limits": { 00:22:28.890 "rw_ios_per_sec": 0, 00:22:28.890 "rw_mbytes_per_sec": 0, 00:22:28.890 "r_mbytes_per_sec": 0, 00:22:28.890 "w_mbytes_per_sec": 0 00:22:28.890 }, 00:22:28.890 "claimed": false, 00:22:28.890 "zoned": false, 00:22:28.890 "supported_io_types": { 00:22:28.890 "read": true, 00:22:28.890 "write": true, 00:22:28.890 "unmap": false, 00:22:28.890 "flush": true, 00:22:28.890 "reset": true, 00:22:28.890 "nvme_admin": true, 00:22:28.890 "nvme_io": true, 00:22:28.890 "nvme_io_md": false, 00:22:28.890 "write_zeroes": true, 00:22:28.890 "zcopy": false, 00:22:28.890 "get_zone_info": false, 00:22:28.890 "zone_management": false, 00:22:28.891 "zone_append": false, 00:22:28.891 "compare": true, 00:22:28.891 "compare_and_write": true, 00:22:28.891 "abort": true, 00:22:28.891 "seek_hole": false, 00:22:28.891 "seek_data": false, 00:22:28.891 "copy": true, 00:22:28.891 "nvme_iov_md": false 00:22:28.891 }, 00:22:28.891 "memory_domains": [ 00:22:28.891 { 00:22:28.891 "dma_device_id": "system", 00:22:28.891 "dma_device_type": 1 00:22:28.891 } 00:22:28.891 ], 00:22:28.891 "driver_specific": { 00:22:28.891 "nvme": [ 00:22:28.891 { 00:22:28.891 "trid": { 00:22:28.891 "trtype": "TCP", 00:22:28.891 "adrfam": "IPv4", 00:22:28.891 "traddr": "10.0.0.2", 00:22:28.891 "trsvcid": "4420", 00:22:28.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:28.891 }, 00:22:28.891 "ctrlr_data": { 00:22:28.891 "cntlid": 1, 00:22:28.891 "vendor_id": "0x8086", 00:22:28.891 "model_number": "SPDK bdev Controller", 00:22:28.891 "serial_number": "00000000000000000000", 00:22:28.891 "firmware_revision": "24.09", 00:22:28.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.891 "oacs": { 00:22:28.891 "security": 0, 00:22:28.891 "format": 0, 00:22:28.891 "firmware": 0, 00:22:28.891 "ns_manage": 0 00:22:28.891 }, 00:22:28.891 "multi_ctrlr": true, 00:22:28.891 "ana_reporting": false 00:22:28.891 }, 00:22:28.891 "vs": { 00:22:28.891 "nvme_version": "1.3" 00:22:28.891 }, 00:22:28.891 "ns_data": { 00:22:28.891 "id": 1, 00:22:28.891 "can_share": true 00:22:28.891 } 00:22:28.891 } 00:22:28.891 ], 00:22:28.891 "mp_policy": "active_passive" 00:22:28.891 } 00:22:28.891 } 00:22:28.891 ] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
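Note: at this point the test holds a complete loopback NVMe-oF path: a 1024 MiB null bdev with 512-byte blocks (hence the 2097152 num_blocks reported above) exported through subsystem nqn.2016-06.io.spdk:cnode0 and re-attached from the initiator side as bdev nvme0n1, carrying the NGUID-derived uuid and cntlid 1. Condensed, the RPC sequence logged above amounts to the following, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the default socket:

  rpc.py nvmf_create_transport -t tcp -o                       # TCP transport
  rpc.py bdev_null_create null0 1024 512                       # name, size in MiB, block size
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5a4648af7caf49178f4688919bcc2a25
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0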
00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 [2024-07-15 18:37:14.227382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.891 [2024-07-15 18:37:14.227440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25ee250 (9): Bad file descriptor 00:22:28.891 [2024-07-15 18:37:14.359419] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 [ 00:22:28.891 { 00:22:28.891 "name": "nvme0n1", 00:22:28.891 "aliases": [ 00:22:28.891 "5a4648af-7caf-4917-8f46-88919bcc2a25" 00:22:28.891 ], 00:22:28.891 "product_name": "NVMe disk", 00:22:28.891 "block_size": 512, 00:22:28.891 "num_blocks": 2097152, 00:22:28.891 "uuid": "5a4648af-7caf-4917-8f46-88919bcc2a25", 00:22:28.891 "assigned_rate_limits": { 00:22:28.891 "rw_ios_per_sec": 0, 00:22:28.891 "rw_mbytes_per_sec": 0, 00:22:28.891 "r_mbytes_per_sec": 0, 00:22:28.891 "w_mbytes_per_sec": 0 00:22:28.891 }, 00:22:28.891 "claimed": false, 00:22:28.891 "zoned": false, 00:22:28.891 "supported_io_types": { 00:22:28.891 "read": true, 00:22:28.891 "write": true, 00:22:28.891 "unmap": false, 00:22:28.891 "flush": true, 00:22:28.891 "reset": true, 00:22:28.891 "nvme_admin": true, 00:22:28.891 "nvme_io": true, 00:22:28.891 "nvme_io_md": false, 00:22:28.891 "write_zeroes": true, 00:22:28.891 "zcopy": false, 00:22:28.891 "get_zone_info": false, 00:22:28.891 "zone_management": false, 00:22:28.891 "zone_append": false, 00:22:28.891 "compare": true, 00:22:28.891 "compare_and_write": true, 00:22:28.891 "abort": true, 00:22:28.891 "seek_hole": false, 00:22:28.891 "seek_data": false, 00:22:28.891 "copy": true, 00:22:28.891 "nvme_iov_md": false 00:22:28.891 }, 00:22:28.891 "memory_domains": [ 00:22:28.891 { 00:22:28.891 "dma_device_id": "system", 00:22:28.891 "dma_device_type": 1 00:22:28.891 } 00:22:28.891 ], 00:22:28.891 "driver_specific": { 00:22:28.891 "nvme": [ 00:22:28.891 { 00:22:28.891 "trid": { 00:22:28.891 "trtype": "TCP", 00:22:28.891 "adrfam": "IPv4", 00:22:28.891 "traddr": "10.0.0.2", 00:22:28.891 "trsvcid": "4420", 00:22:28.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:28.891 }, 00:22:28.891 "ctrlr_data": { 00:22:28.891 "cntlid": 2, 00:22:28.891 "vendor_id": "0x8086", 00:22:28.891 "model_number": "SPDK bdev Controller", 00:22:28.891 "serial_number": "00000000000000000000", 00:22:28.891 "firmware_revision": "24.09", 00:22:28.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:28.891 "oacs": { 00:22:28.891 "security": 0, 00:22:28.891 "format": 0, 00:22:28.891 "firmware": 0, 00:22:28.891 "ns_manage": 0 00:22:28.891 }, 00:22:28.891 "multi_ctrlr": true, 00:22:28.891 "ana_reporting": false 00:22:28.891 }, 00:22:28.891 "vs": { 00:22:28.891 "nvme_version": "1.3" 00:22:28.891 }, 00:22:28.891 "ns_data": { 00:22:28.891 "id": 1, 00:22:28.891 "can_share": true 00:22:28.891 } 00:22:28.891 } 00:22:28.891 ], 00:22:28.891 "mp_policy": "active_passive" 00:22:28.891 } 00:22:28.891 } 
00:22:28.891 ] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.VoCteaUxIS 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.VoCteaUxIS 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 [2024-07-15 18:37:14.419971] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.891 [2024-07-15 18:37:14.420095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VoCteaUxIS 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 [2024-07-15 18:37:14.427986] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VoCteaUxIS 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 [2024-07-15 18:37:14.436021] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.891 [2024-07-15 18:37:14.436057] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
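Note: the secure-channel leg repeats the attach over port 4421 with a TLS pre-shared key. The key uses the NVMe TLS PSK interchange format (NVMeTLSkey-1:...), is written to a mktemp file with mode 0600, arbitrary hosts are disabled on the subsystem, and both the listener and the host entry reference the PSK; the deprecation warnings above flag that path-based PSKs are scheduled for removal in v24.09. The equivalent RPC flow, with a hypothetical fixed key path standing in for the mktemp name:

  KEY=/tmp/psk.txt   # hypothetical path; the test uses mktemp
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > $KEY
  chmod 0600 $KEY
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk $KEY
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk $KEY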
00:22:29.151 nvme0n1 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.151 [ 00:22:29.151 { 00:22:29.151 "name": "nvme0n1", 00:22:29.151 "aliases": [ 00:22:29.151 "5a4648af-7caf-4917-8f46-88919bcc2a25" 00:22:29.151 ], 00:22:29.151 "product_name": "NVMe disk", 00:22:29.151 "block_size": 512, 00:22:29.151 "num_blocks": 2097152, 00:22:29.151 "uuid": "5a4648af-7caf-4917-8f46-88919bcc2a25", 00:22:29.151 "assigned_rate_limits": { 00:22:29.151 "rw_ios_per_sec": 0, 00:22:29.151 "rw_mbytes_per_sec": 0, 00:22:29.151 "r_mbytes_per_sec": 0, 00:22:29.151 "w_mbytes_per_sec": 0 00:22:29.151 }, 00:22:29.151 "claimed": false, 00:22:29.151 "zoned": false, 00:22:29.151 "supported_io_types": { 00:22:29.151 "read": true, 00:22:29.151 "write": true, 00:22:29.151 "unmap": false, 00:22:29.151 "flush": true, 00:22:29.151 "reset": true, 00:22:29.151 "nvme_admin": true, 00:22:29.151 "nvme_io": true, 00:22:29.151 "nvme_io_md": false, 00:22:29.151 "write_zeroes": true, 00:22:29.151 "zcopy": false, 00:22:29.151 "get_zone_info": false, 00:22:29.151 "zone_management": false, 00:22:29.151 "zone_append": false, 00:22:29.151 "compare": true, 00:22:29.151 "compare_and_write": true, 00:22:29.151 "abort": true, 00:22:29.151 "seek_hole": false, 00:22:29.151 "seek_data": false, 00:22:29.151 "copy": true, 00:22:29.151 "nvme_iov_md": false 00:22:29.151 }, 00:22:29.151 "memory_domains": [ 00:22:29.151 { 00:22:29.151 "dma_device_id": "system", 00:22:29.151 "dma_device_type": 1 00:22:29.151 } 00:22:29.151 ], 00:22:29.151 "driver_specific": { 00:22:29.151 "nvme": [ 00:22:29.151 { 00:22:29.151 "trid": { 00:22:29.151 "trtype": "TCP", 00:22:29.151 "adrfam": "IPv4", 00:22:29.151 "traddr": "10.0.0.2", 00:22:29.151 "trsvcid": "4421", 00:22:29.151 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:29.151 }, 00:22:29.151 "ctrlr_data": { 00:22:29.151 "cntlid": 3, 00:22:29.151 "vendor_id": "0x8086", 00:22:29.151 "model_number": "SPDK bdev Controller", 00:22:29.151 "serial_number": "00000000000000000000", 00:22:29.151 "firmware_revision": "24.09", 00:22:29.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.151 "oacs": { 00:22:29.151 "security": 0, 00:22:29.151 "format": 0, 00:22:29.151 "firmware": 0, 00:22:29.151 "ns_manage": 0 00:22:29.151 }, 00:22:29.151 "multi_ctrlr": true, 00:22:29.151 "ana_reporting": false 00:22:29.151 }, 00:22:29.151 "vs": { 00:22:29.151 "nvme_version": "1.3" 00:22:29.151 }, 00:22:29.151 "ns_data": { 00:22:29.151 "id": 1, 00:22:29.151 "can_share": true 00:22:29.151 } 00:22:29.151 } 00:22:29.151 ], 00:22:29.151 "mp_policy": "active_passive" 00:22:29.151 } 00:22:29.151 } 00:22:29.151 ] 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.VoCteaUxIS 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.151 rmmod nvme_tcp 00:22:29.151 rmmod nvme_fabrics 00:22:29.151 rmmod nvme_keyring 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3993109 ']' 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3993109 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3993109 ']' 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3993109 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3993109 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3993109' 00:22:29.151 killing process with pid 3993109 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3993109 00:22:29.151 [2024-07-15 18:37:14.654459] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:29.151 [2024-07-15 18:37:14.654483] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:29.151 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3993109 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.410 18:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:31.945 18:37:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.945 00:22:31.945 real 0m9.703s 00:22:31.945 user 0m3.564s 00:22:31.945 sys 0m4.677s 00:22:31.945 18:37:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.945 18:37:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.945 ************************************ 00:22:31.945 END TEST nvmf_async_init 00:22:31.945 ************************************ 00:22:31.945 18:37:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:31.945 18:37:16 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:31.945 18:37:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:31.945 18:37:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.945 18:37:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.945 ************************************ 00:22:31.945 START TEST dma 00:22:31.945 ************************************ 00:22:31.945 18:37:16 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:31.945 * Looking for test storage... 00:22:31.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.945 18:37:17 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.945 18:37:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.945 18:37:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.945 18:37:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.945 18:37:17 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.945 18:37:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.945 18:37:17 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.945 18:37:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:31.945 18:37:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.945 18:37:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.945 18:37:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:31.945 18:37:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:31.945 00:22:31.945 real 0m0.121s 00:22:31.945 user 0m0.063s 00:22:31.945 sys 0m0.065s 00:22:31.945 18:37:17 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.945 18:37:17 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:31.945 ************************************ 00:22:31.945 END TEST dma 00:22:31.945 ************************************ 00:22:31.945 18:37:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:31.945 18:37:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:31.945 18:37:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:31.945 18:37:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.945 18:37:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.945 ************************************ 00:22:31.945 START TEST nvmf_identify 00:22:31.945 ************************************ 00:22:31.945 18:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:31.945 * Looking for test storage... 00:22:31.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.946 18:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:37.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:37.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:37.223 Found net devices under 0000:86:00.0: cvl_0_0 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
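Editor's note: the xtrace around this point shows nvmf/common.sh classifying NICs. It fills arrays of known Intel E810/X722 and Mellanox PCI device IDs, loops over the detected PCI functions and matches them ("Found 0000:86:00.0 (0x8086 - 0x159b)"), and the "Found net devices under ..." records that follow resolve each match to its kernel net device through sysfs. A minimal standalone sketch of that sysfs walk, under stated assumptions: this is not the SPDK helper itself, the ID list is trimmed to the two E810 IDs matched in this run (0x1592, 0x159b), and the output format merely imitates the "Found ..." lines in the log.

#!/usr/bin/env bash
# Sketch: resolve supported Intel E810 PCI functions to their kernel net
# devices via sysfs, mirroring the gather_supported_nvmf_pci_devs walk above.
intel=0x8086
e810=(0x1592 0x159b)    # device IDs this run matched (0x8086 - 0x159b)

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # A driver-bound port shows up under .../net/, which is what the
        # "Found net devices under 0000:86:00.x" records report.
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
done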
00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:37.223 Found net devices under 0000:86:00.1: cvl_0_1 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.223 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:22:37.483 00:22:37.483 --- 10.0.0.2 ping statistics --- 00:22:37.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.483 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:37.483 00:22:37.483 --- 10.0.0.1 ping statistics --- 00:22:37.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.483 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3996917 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3996917 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3996917 ']' 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.483 18:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:37.483 [2024-07-15 18:37:23.026370] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
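Editor's note: at this point nvmftestinit has built the test topology: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, reachability is proven with one ping in each direction, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace. For reference, the same split can be rebuilt on any Linux box with a veth pair instead of the two E810 ports; the namespace and interface names below are invented for this sketch.

#!/usr/bin/env bash
set -e
# Sketch: reproduce the initiator/target namespace split from the log
# with a veth pair (run as root).
ip netns add spdk_tgt_ns                        # stands in for cvl_0_0_ns_spdk
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns          # target port, like cvl_0_0
ip addr add 10.0.0.1/24 dev veth_init           # initiator side, like cvl_0_1
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator
# The target process then runs inside the namespace, as the log shows:
#   ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF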
00:22:37.483 [2024-07-15 18:37:23.026410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.742 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.742 [2024-07-15 18:37:23.094071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.742 [2024-07-15 18:37:23.166954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.742 [2024-07-15 18:37:23.166997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.742 [2024-07-15 18:37:23.167004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.742 [2024-07-15 18:37:23.167009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.742 [2024-07-15 18:37:23.167014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.742 [2024-07-15 18:37:23.167129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.742 [2024-07-15 18:37:23.167347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.742 [2024-07-15 18:37:23.167254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.742 [2024-07-15 18:37:23.167358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.310 [2024-07-15 18:37:23.843237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.310 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 Malloc0 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 [2024-07-15 18:37:23.930851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.609 [ 00:22:38.609 { 00:22:38.609 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.609 "subtype": "Discovery", 00:22:38.609 "listen_addresses": [ 00:22:38.609 { 00:22:38.609 "trtype": "TCP", 00:22:38.609 "adrfam": "IPv4", 00:22:38.609 "traddr": "10.0.0.2", 00:22:38.609 "trsvcid": "4420" 00:22:38.609 } 00:22:38.609 ], 00:22:38.609 "allow_any_host": true, 00:22:38.609 "hosts": [] 00:22:38.609 }, 00:22:38.609 { 00:22:38.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.609 "subtype": "NVMe", 00:22:38.609 "listen_addresses": [ 00:22:38.609 { 00:22:38.609 "trtype": "TCP", 00:22:38.609 "adrfam": "IPv4", 00:22:38.609 "traddr": "10.0.0.2", 00:22:38.609 "trsvcid": "4420" 00:22:38.609 } 00:22:38.609 ], 00:22:38.609 "allow_any_host": true, 00:22:38.609 "hosts": [], 00:22:38.609 "serial_number": "SPDK00000000000001", 00:22:38.609 "model_number": "SPDK bdev Controller", 00:22:38.609 "max_namespaces": 32, 00:22:38.609 "min_cntlid": 1, 00:22:38.609 "max_cntlid": 65519, 00:22:38.609 "namespaces": [ 00:22:38.609 { 00:22:38.609 "nsid": 1, 00:22:38.609 "bdev_name": "Malloc0", 00:22:38.609 "name": "Malloc0", 00:22:38.609 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:38.609 "eui64": "ABCDEF0123456789", 00:22:38.609 "uuid": "e447a8ed-c6da-4b93-904f-2efe403e3955" 00:22:38.609 } 00:22:38.609 ] 00:22:38.609 } 00:22:38.609 ] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.609 18:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:38.609 [2024-07-15 18:37:23.983507] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
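Editor's note: the rpc_cmd steps above (host/identify.sh@24 through @37) configure the freshly started target over /var/tmp/spdk.sock: create the TCP transport, back a namespace with the 64 MiB / 512-byte-block malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, and add listeners for it and for the discovery service on 10.0.0.2:4420. Written out as direct scripts/rpc.py calls from an SPDK checkout, the same sequence would be roughly:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems

With that in place, the spdk_nvme_identify invocation above connects to the discovery subsystem at 10.0.0.2:4420 and produces the controller dump that follows.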
00:22:38.609 [2024-07-15 18:37:23.983551] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997132 ] 00:22:38.609 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.609 [2024-07-15 18:37:24.013573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:38.609 [2024-07-15 18:37:24.013614] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:38.609 [2024-07-15 18:37:24.013619] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:38.609 [2024-07-15 18:37:24.013629] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:38.610 [2024-07-15 18:37:24.013634] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:38.610 [2024-07-15 18:37:24.013830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:38.610 [2024-07-15 18:37:24.013857] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2419ec0 0 00:22:38.610 [2024-07-15 18:37:24.020352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:38.610 [2024-07-15 18:37:24.020362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:38.610 [2024-07-15 18:37:24.020366] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:38.610 [2024-07-15 18:37:24.020369] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:38.610 [2024-07-15 18:37:24.020400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.020406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.020409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.020420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:38.610 [2024-07-15 18:37:24.020433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.027345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.027353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.027357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.027369] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:38.610 [2024-07-15 18:37:24.027375] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:38.610 [2024-07-15 18:37:24.027379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:38.610 [2024-07-15 18:37:24.027391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027395] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.027404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.027416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.027575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.027581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.027584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.027592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:38.610 [2024-07-15 18:37:24.027598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:38.610 [2024-07-15 18:37:24.027604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.027616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.027625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.027690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.027695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.027698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.027706] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:38.610 [2024-07-15 18:37:24.027712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.027718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.027730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.027739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.027801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 
[2024-07-15 18:37:24.027807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.027810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.027817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.027825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.027837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.027846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.027912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.027917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.027920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.027923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.027927] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:38.610 [2024-07-15 18:37:24.027931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.027937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.028042] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:38.610 [2024-07-15 18:37:24.028046] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.028054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.028065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.028075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.028135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.028141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.028144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.028151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:38.610 [2024-07-15 18:37:24.028158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.028170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.028181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.028247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.610 [2024-07-15 18:37:24.028253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.610 [2024-07-15 18:37:24.028255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.610 [2024-07-15 18:37:24.028262] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:38.610 [2024-07-15 18:37:24.028267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:38.610 [2024-07-15 18:37:24.028273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:38.610 [2024-07-15 18:37:24.028279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:38.610 [2024-07-15 18:37:24.028287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.610 [2024-07-15 18:37:24.028296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.610 [2024-07-15 18:37:24.028305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.610 [2024-07-15 18:37:24.028394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.610 [2024-07-15 18:37:24.028400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.610 [2024-07-15 18:37:24.028404] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.610 [2024-07-15 18:37:24.028407] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419ec0): datao=0, datal=4096, cccid=0 00:22:38.610 [2024-07-15 18:37:24.028411] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249ce40) on tqpair(0x2419ec0): expected_datao=0, payload_size=4096 00:22:38.610 [2024-07-15 18:37:24.028415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028421] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028424] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.028440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.028443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.028452] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:38.611 [2024-07-15 18:37:24.028458] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:38.611 [2024-07-15 18:37:24.028463] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:38.611 [2024-07-15 18:37:24.028467] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:38.611 [2024-07-15 18:37:24.028471] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:38.611 [2024-07-15 18:37:24.028474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:38.611 [2024-07-15 18:37:24.028481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:38.611 [2024-07-15 18:37:24.028488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.611 [2024-07-15 18:37:24.028510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.611 [2024-07-15 18:37:24.028574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.028580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.028582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.028592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.611 [2024-07-15 18:37:24.028608] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.611 [2024-07-15 18:37:24.028624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.611 [2024-07-15 18:37:24.028640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.611 [2024-07-15 18:37:24.028654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:38.611 [2024-07-15 18:37:24.028663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:38.611 [2024-07-15 18:37:24.028669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.611 [2024-07-15 18:37:24.028688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249ce40, cid 0, qid 0 00:22:38.611 [2024-07-15 18:37:24.028692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249cfc0, cid 1, qid 0 00:22:38.611 [2024-07-15 18:37:24.028696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d140, cid 2, qid 0 00:22:38.611 [2024-07-15 18:37:24.028702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.611 [2024-07-15 18:37:24.028706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d440, cid 4, qid 0 00:22:38.611 [2024-07-15 18:37:24.028795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.028801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.028804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d440) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.028811] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:38.611 [2024-07-15 18:37:24.028815] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:38.611 [2024-07-15 18:37:24.028824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.028833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.611 [2024-07-15 18:37:24.028841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d440, cid 4, qid 0 00:22:38.611 [2024-07-15 18:37:24.028914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.611 [2024-07-15 18:37:24.028920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.611 [2024-07-15 18:37:24.028923] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419ec0): datao=0, datal=4096, cccid=4 00:22:38.611 [2024-07-15 18:37:24.028930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249d440) on tqpair(0x2419ec0): expected_datao=0, payload_size=4096 00:22:38.611 [2024-07-15 18:37:24.028933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028943] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.028946] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.071359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.071363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d440) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.071378] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:38.611 [2024-07-15 18:37:24.071398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.071409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.611 [2024-07-15 18:37:24.071414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.071426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.611 [2024-07-15 18:37:24.071440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x249d440, cid 4, qid 0 00:22:38.611 [2024-07-15 18:37:24.071445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d5c0, cid 5, qid 0 00:22:38.611 [2024-07-15 18:37:24.071628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.611 [2024-07-15 18:37:24.071634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.611 [2024-07-15 18:37:24.071637] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071641] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419ec0): datao=0, datal=1024, cccid=4 00:22:38.611 [2024-07-15 18:37:24.071644] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249d440) on tqpair(0x2419ec0): expected_datao=0, payload_size=1024 00:22:38.611 [2024-07-15 18:37:24.071648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071654] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071657] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.071666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.071669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.071673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d5c0) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.112505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.611 [2024-07-15 18:37:24.112513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.611 [2024-07-15 18:37:24.112517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.112520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d440) on tqpair=0x2419ec0 00:22:38.611 [2024-07-15 18:37:24.112535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.611 [2024-07-15 18:37:24.112539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419ec0) 00:22:38.611 [2024-07-15 18:37:24.112545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.611 [2024-07-15 18:37:24.112561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d440, cid 4, qid 0 00:22:38.611 [2024-07-15 18:37:24.112635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.611 [2024-07-15 18:37:24.112640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.611 [2024-07-15 18:37:24.112643] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112646] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419ec0): datao=0, datal=3072, cccid=4 00:22:38.612 [2024-07-15 18:37:24.112650] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249d440) on tqpair(0x2419ec0): expected_datao=0, payload_size=3072 00:22:38.612 [2024-07-15 18:37:24.112654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112660] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112663] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.612 [2024-07-15 18:37:24.112687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.612 [2024-07-15 18:37:24.112690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d440) on tqpair=0x2419ec0 00:22:38.612 [2024-07-15 18:37:24.112701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419ec0) 00:22:38.612 [2024-07-15 18:37:24.112710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.612 [2024-07-15 18:37:24.112724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d440, cid 4, qid 0 00:22:38.612 [2024-07-15 18:37:24.112795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.612 [2024-07-15 18:37:24.112802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.612 [2024-07-15 18:37:24.112805] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419ec0): datao=0, datal=8, cccid=4 00:22:38.612 [2024-07-15 18:37:24.112812] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249d440) on tqpair(0x2419ec0): expected_datao=0, payload_size=8 00:22:38.612 [2024-07-15 18:37:24.112816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112821] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.612 [2024-07-15 18:37:24.112824] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.884 [2024-07-15 18:37:24.153497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.884 [2024-07-15 18:37:24.153506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.884 [2024-07-15 18:37:24.153510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.884 [2024-07-15 18:37:24.153513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d440) on tqpair=0x2419ec0 00:22:38.884 ===================================================== 00:22:38.884 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:38.884 ===================================================== 00:22:38.884 Controller Capabilities/Features 00:22:38.884 ================================ 00:22:38.884 Vendor ID: 0000 00:22:38.884 Subsystem Vendor ID: 0000 00:22:38.884 Serial Number: .................... 00:22:38.884 Model Number: ........................................ 
00:22:38.884 Firmware Version: 24.09 00:22:38.884 Recommended Arb Burst: 0 00:22:38.884 IEEE OUI Identifier: 00 00 00 00:22:38.884 Multi-path I/O 00:22:38.884 May have multiple subsystem ports: No 00:22:38.884 May have multiple controllers: No 00:22:38.884 Associated with SR-IOV VF: No 00:22:38.884 Max Data Transfer Size: 131072 00:22:38.884 Max Number of Namespaces: 0 00:22:38.884 Max Number of I/O Queues: 1024 00:22:38.884 NVMe Specification Version (VS): 1.3 00:22:38.884 NVMe Specification Version (Identify): 1.3 00:22:38.884 Maximum Queue Entries: 128 00:22:38.884 Contiguous Queues Required: Yes 00:22:38.884 Arbitration Mechanisms Supported 00:22:38.884 Weighted Round Robin: Not Supported 00:22:38.884 Vendor Specific: Not Supported 00:22:38.884 Reset Timeout: 15000 ms 00:22:38.884 Doorbell Stride: 4 bytes 00:22:38.884 NVM Subsystem Reset: Not Supported 00:22:38.884 Command Sets Supported 00:22:38.884 NVM Command Set: Supported 00:22:38.884 Boot Partition: Not Supported 00:22:38.884 Memory Page Size Minimum: 4096 bytes 00:22:38.884 Memory Page Size Maximum: 4096 bytes 00:22:38.884 Persistent Memory Region: Not Supported 00:22:38.884 Optional Asynchronous Events Supported 00:22:38.884 Namespace Attribute Notices: Not Supported 00:22:38.884 Firmware Activation Notices: Not Supported 00:22:38.884 ANA Change Notices: Not Supported 00:22:38.884 PLE Aggregate Log Change Notices: Not Supported 00:22:38.884 LBA Status Info Alert Notices: Not Supported 00:22:38.884 EGE Aggregate Log Change Notices: Not Supported 00:22:38.884 Normal NVM Subsystem Shutdown event: Not Supported 00:22:38.884 Zone Descriptor Change Notices: Not Supported 00:22:38.884 Discovery Log Change Notices: Supported 00:22:38.884 Controller Attributes 00:22:38.884 128-bit Host Identifier: Not Supported 00:22:38.884 Non-Operational Permissive Mode: Not Supported 00:22:38.884 NVM Sets: Not Supported 00:22:38.884 Read Recovery Levels: Not Supported 00:22:38.884 Endurance Groups: Not Supported 00:22:38.884 Predictable Latency Mode: Not Supported 00:22:38.884 Traffic Based Keep Alive: Not Supported 00:22:38.884 Namespace Granularity: Not Supported 00:22:38.884 SQ Associations: Not Supported 00:22:38.884 UUID List: Not Supported 00:22:38.884 Multi-Domain Subsystem: Not Supported 00:22:38.884 Fixed Capacity Management: Not Supported 00:22:38.884 Variable Capacity Management: Not Supported 00:22:38.884 Delete Endurance Group: Not Supported 00:22:38.884 Delete NVM Set: Not Supported 00:22:38.884 Extended LBA Formats Supported: Not Supported 00:22:38.884 Flexible Data Placement Supported: Not Supported 00:22:38.884 00:22:38.884 Controller Memory Buffer Support 00:22:38.884 ================================ 00:22:38.884 Supported: No 00:22:38.884 00:22:38.884 Persistent Memory Region Support 00:22:38.884 ================================ 00:22:38.884 Supported: No 00:22:38.884 00:22:38.884 Admin Command Set Attributes 00:22:38.884 ============================ 00:22:38.884 Security Send/Receive: Not Supported 00:22:38.884 Format NVM: Not Supported 00:22:38.884 Firmware Activate/Download: Not Supported 00:22:38.884 Namespace Management: Not Supported 00:22:38.884 Device Self-Test: Not Supported 00:22:38.884 Directives: Not Supported 00:22:38.884 NVMe-MI: Not Supported 00:22:38.884 Virtualization Management: Not Supported 00:22:38.884 Doorbell Buffer Config: Not Supported 00:22:38.884 Get LBA Status Capability: Not Supported 00:22:38.884 Command & Feature Lockdown Capability: Not Supported 00:22:38.884 Abort Command Limit: 1 00:22:38.884 Async
Event Request Limit: 4 00:22:38.884 Number of Firmware Slots: N/A 00:22:38.884 Firmware Slot 1 Read-Only: N/A 00:22:38.884 Firmware Activation Without Reset: N/A 00:22:38.884 Multiple Update Detection Support: N/A 00:22:38.884 Firmware Update Granularity: No Information Provided 00:22:38.884 Per-Namespace SMART Log: No 00:22:38.884 Asymmetric Namespace Access Log Page: Not Supported 00:22:38.885 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:38.885 Command Effects Log Page: Not Supported 00:22:38.885 Get Log Page Extended Data: Supported 00:22:38.885 Telemetry Log Pages: Not Supported 00:22:38.885 Persistent Event Log Pages: Not Supported 00:22:38.885 Supported Log Pages Log Page: May Support 00:22:38.885 Commands Supported & Effects Log Page: Not Supported 00:22:38.885 Feature Identifiers & Effects Log Page:May Support 00:22:38.885 NVMe-MI Commands & Effects Log Page: May Support 00:22:38.885 Data Area 4 for Telemetry Log: Not Supported 00:22:38.885 Error Log Page Entries Supported: 128 00:22:38.885 Keep Alive: Not Supported 00:22:38.885 00:22:38.885 NVM Command Set Attributes 00:22:38.885 ========================== 00:22:38.885 Submission Queue Entry Size 00:22:38.885 Max: 1 00:22:38.885 Min: 1 00:22:38.885 Completion Queue Entry Size 00:22:38.885 Max: 1 00:22:38.885 Min: 1 00:22:38.885 Number of Namespaces: 0 00:22:38.885 Compare Command: Not Supported 00:22:38.885 Write Uncorrectable Command: Not Supported 00:22:38.885 Dataset Management Command: Not Supported 00:22:38.885 Write Zeroes Command: Not Supported 00:22:38.885 Set Features Save Field: Not Supported 00:22:38.885 Reservations: Not Supported 00:22:38.885 Timestamp: Not Supported 00:22:38.885 Copy: Not Supported 00:22:38.885 Volatile Write Cache: Not Present 00:22:38.885 Atomic Write Unit (Normal): 1 00:22:38.885 Atomic Write Unit (PFail): 1 00:22:38.885 Atomic Compare & Write Unit: 1 00:22:38.885 Fused Compare & Write: Supported 00:22:38.885 Scatter-Gather List 00:22:38.885 SGL Command Set: Supported 00:22:38.885 SGL Keyed: Supported 00:22:38.885 SGL Bit Bucket Descriptor: Not Supported 00:22:38.885 SGL Metadata Pointer: Not Supported 00:22:38.885 Oversized SGL: Not Supported 00:22:38.885 SGL Metadata Address: Not Supported 00:22:38.885 SGL Offset: Supported 00:22:38.885 Transport SGL Data Block: Not Supported 00:22:38.885 Replay Protected Memory Block: Not Supported 00:22:38.885 00:22:38.885 Firmware Slot Information 00:22:38.885 ========================= 00:22:38.885 Active slot: 0 00:22:38.885 00:22:38.885 00:22:38.885 Error Log 00:22:38.885 ========= 00:22:38.885 00:22:38.885 Active Namespaces 00:22:38.885 ================= 00:22:38.885 Discovery Log Page 00:22:38.885 ================== 00:22:38.885 Generation Counter: 2 00:22:38.885 Number of Records: 2 00:22:38.885 Record Format: 0 00:22:38.885 00:22:38.885 Discovery Log Entry 0 00:22:38.885 ---------------------- 00:22:38.885 Transport Type: 3 (TCP) 00:22:38.885 Address Family: 1 (IPv4) 00:22:38.885 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:38.885 Entry Flags: 00:22:38.885 Duplicate Returned Information: 1 00:22:38.885 Explicit Persistent Connection Support for Discovery: 1 00:22:38.885 Transport Requirements: 00:22:38.885 Secure Channel: Not Required 00:22:38.885 Port ID: 0 (0x0000) 00:22:38.885 Controller ID: 65535 (0xffff) 00:22:38.885 Admin Max SQ Size: 128 00:22:38.885 Transport Service Identifier: 4420 00:22:38.885 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:38.885 Transport Address: 10.0.0.2 00:22:38.885 
Discovery Log Entry 1 00:22:38.885 ---------------------- 00:22:38.885 Transport Type: 3 (TCP) 00:22:38.885 Address Family: 1 (IPv4) 00:22:38.885 Subsystem Type: 2 (NVM Subsystem) 00:22:38.885 Entry Flags: 00:22:38.885 Duplicate Returned Information: 0 00:22:38.885 Explicit Persistent Connection Support for Discovery: 0 00:22:38.885 Transport Requirements: 00:22:38.885 Secure Channel: Not Required 00:22:38.885 Port ID: 0 (0x0000) 00:22:38.885 Controller ID: 65535 (0xffff) 00:22:38.885 Admin Max SQ Size: 128 00:22:38.885 Transport Service Identifier: 4420 00:22:38.885 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:38.885 Transport Address: 10.0.0.2 [2024-07-15 18:37:24.153588] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:38.885 [2024-07-15 18:37:24.153598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249ce40) on tqpair=0x2419ec0 00:22:38.885 [2024-07-15 18:37:24.153603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.885 [2024-07-15 18:37:24.153607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249cfc0) on tqpair=0x2419ec0 00:22:38.885 [2024-07-15 18:37:24.153611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.885 [2024-07-15 18:37:24.153615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d140) on tqpair=0x2419ec0 00:22:38.885 [2024-07-15 18:37:24.153619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.885 [2024-07-15 18:37:24.153623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.885 [2024-07-15 18:37:24.153627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.885 [2024-07-15 18:37:24.153636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.885 [2024-07-15 18:37:24.153640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.885 [2024-07-15 18:37:24.153643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.885 [2024-07-15 18:37:24.153649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.885 [2024-07-15 18:37:24.153662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.885 [2024-07-15 18:37:24.153723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.885 [2024-07-15 18:37:24.153729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.885 [2024-07-15 18:37:24.153731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.885 [2024-07-15 18:37:24.153735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.885 [2024-07-15 18:37:24.153741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.885 [2024-07-15 18:37:24.153744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 
18:37:24.153753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.153764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.153838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.153845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.153848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.153855] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:38.886 [2024-07-15 18:37:24.153859] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:38.886 [2024-07-15 18:37:24.153867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.153879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.153887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.153955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.153960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.153963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.153975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.153981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.153987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.153995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.154054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.154060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.154063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.154074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154080] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.154086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.154095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.154152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.154158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.154161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.154171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.154183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.154194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.154271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.154277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.154280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.154291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.154297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.154303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.886 [2024-07-15 18:37:24.154311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0 00:22:38.886 [2024-07-15 18:37:24.158345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.886 [2024-07-15 18:37:24.158352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.886 [2024-07-15 18:37:24.158355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.158359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0 00:22:38.886 [2024-07-15 18:37:24.158368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.158371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.886 [2024-07-15 18:37:24.158374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419ec0) 00:22:38.886 [2024-07-15 18:37:24.158380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.886 [2024-07-15 18:37:24.158391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249d2c0, cid 3, qid 0
00:22:38.886 [2024-07-15 18:37:24.158545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:38.886 [2024-07-15 18:37:24.158551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:38.886 [2024-07-15 18:37:24.158554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:38.886 [2024-07-15 18:37:24.158557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249d2c0) on tqpair=0x2419ec0
00:22:38.886 [2024-07-15 18:37:24.158563] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:22:38.886
00:22:38.886 18:37:24 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:38.886 [2024-07-15 18:37:24.196624] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:22:38.886 [2024-07-15 18:37:24.196663] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997170 ]
00:22:38.886 EAL: No free 2048 kB hugepages reported on node 1
00:22:38.886 [2024-07-15 18:37:24.224342] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:22:38.886 [2024-07-15 18:37:24.224383] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:22:38.886 [2024-07-15 18:37:24.224388] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:22:38.886 [2024-07-15 18:37:24.224397] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:22:38.886 [2024-07-15 18:37:24.224405] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:22:38.887 [2024-07-15 18:37:24.224692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:22:38.887 [2024-07-15 18:37:24.224717] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb77ec0 0
00:22:38.887 [2024-07-15 18:37:24.239345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:22:38.887 [2024-07-15 18:37:24.239356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:22:38.887 [2024-07-15 18:37:24.239359] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:22:38.887 [2024-07-15 18:37:24.239362] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:22:38.887 [2024-07-15 18:37:24.239390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:38.887 [2024-07-15 18:37:24.239395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:38.887 [2024-07-15 18:37:24.239398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0)
00:22:38.887 [2024-07-15 18:37:24.239408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:22:38.887 [2024-07-15 18:37:24.239422]
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.247349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.247357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.247360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.887 [2024-07-15 18:37:24.247374] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:38.887 [2024-07-15 18:37:24.247379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:38.887 [2024-07-15 18:37:24.247384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:38.887 [2024-07-15 18:37:24.247394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.887 [2024-07-15 18:37:24.247407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.887 [2024-07-15 18:37:24.247420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.247573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.247578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.247581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.887 [2024-07-15 18:37:24.247589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:38.887 [2024-07-15 18:37:24.247595] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:38.887 [2024-07-15 18:37:24.247601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.887 [2024-07-15 18:37:24.247612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.887 [2024-07-15 18:37:24.247622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.247686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.247692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.247695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 
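The FABRIC CONNECT above establishes the admin queue to nqn.2016-06.io.spdk:cnode1, and the "read vs" / "read cap" states that follow fetch the VS and CAP controller properties over FABRIC PROPERTY GET before the controller is enabled. A minimal sketch of how those cached values can be read back through SPDK's public register accessors, assuming a ctrlr handle returned by spdk_nvme_connect(); the print strings are illustrative and not part of the captured output:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_init_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* VS and CAP were fetched by the "read vs"/"read cap" states traced
         * above via FABRIC PROPERTY GET; the driver caches them per ctrlr. */
        const union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        const union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        const union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        printf("NVMe spec version: %u.%u\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr); /* 1.3 in this run */
        printf("Max queue entries: %u\n",
               (unsigned)cap.bits.mqes + 1);                  /* MQES is zero-based: 128 here */
        printf("CSTS.RDY: %u\n",
               (unsigned)csts.bits.rdy);                      /* 1 once CC.EN = 1 completes */
    }
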
00:22:38.887 [2024-07-15 18:37:24.247702] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:38.887 [2024-07-15 18:37:24.247708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:38.887 [2024-07-15 18:37:24.247714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.887 [2024-07-15 18:37:24.247726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.887 [2024-07-15 18:37:24.247735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.247797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.247803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.247806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.887 [2024-07-15 18:37:24.247813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:38.887 [2024-07-15 18:37:24.247821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.887 [2024-07-15 18:37:24.247833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.887 [2024-07-15 18:37:24.247842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.247901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.247906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.247909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.247912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.887 [2024-07-15 18:37:24.247916] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:38.887 [2024-07-15 18:37:24.247920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:38.887 [2024-07-15 18:37:24.247927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:38.887 [2024-07-15 18:37:24.248031] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:38.887 [2024-07-15 18:37:24.248035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:38.887 [2024-07-15 18:37:24.248041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.248044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.248047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.887 [2024-07-15 18:37:24.248053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.887 [2024-07-15 18:37:24.248064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.887 [2024-07-15 18:37:24.248124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.887 [2024-07-15 18:37:24.248129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.887 [2024-07-15 18:37:24.248132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.887 [2024-07-15 18:37:24.248135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.888 [2024-07-15 18:37:24.248139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:38.888 [2024-07-15 18:37:24.248147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.888 [2024-07-15 18:37:24.248167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.888 [2024-07-15 18:37:24.248235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.888 [2024-07-15 18:37:24.248240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.888 [2024-07-15 18:37:24.248243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.888 [2024-07-15 18:37:24.248250] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:38.888 [2024-07-15 18:37:24.248254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248260] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:38.888 [2024-07-15 18:37:24.248267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.888 [2024-07-15 18:37:24.248292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.888 [2024-07-15 18:37:24.248403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.888 [2024-07-15 18:37:24.248409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.888 [2024-07-15 18:37:24.248412] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248415] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=4096, cccid=0 00:22:38.888 [2024-07-15 18:37:24.248419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfae40) on tqpair(0xb77ec0): expected_datao=0, payload_size=4096 00:22:38.888 [2024-07-15 18:37:24.248423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248429] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248432] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.888 [2024-07-15 18:37:24.248452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.888 [2024-07-15 18:37:24.248455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.888 [2024-07-15 18:37:24.248466] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:38.888 [2024-07-15 18:37:24.248472] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:38.888 [2024-07-15 18:37:24.248476] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:38.888 [2024-07-15 18:37:24.248479] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:38.888 [2024-07-15 18:37:24.248483] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:38.888 [2024-07-15 18:37:24.248486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.888 [2024-07-15 18:37:24.248524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.888 [2024-07-15 18:37:24.248589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.888 
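The IDENTIFY (06h) completion above is what populates the controller data that nvme_ctrlr_identify_done then reports (transport max_xfer_size, MDTS, CNTLID, fused compare & write). A hedged sketch of reading those same fields back through the public API; the helper name and print strings are illustrative:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_limits(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Filled in by the IDENTIFY controller command traced above. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        uint32_t max_xfer = spdk_nvme_ctrlr_get_max_xfer_size(ctrlr);

        printf("CNTLID: 0x%04x\n", cdata->cntlid);          /* 0x0001 in this run */
        printf("Max transfer size: %u bytes\n", max_xfer);   /* 131072, the MDTS limit */
        printf("Fused compare-and-write: %u\n",
               (unsigned)cdata->fuses.compare_and_write);    /* 1, per the trace */
    }
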
[2024-07-15 18:37:24.248595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.888 [2024-07-15 18:37:24.248597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.888 [2024-07-15 18:37:24.248607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.888 [2024-07-15 18:37:24.248623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.888 [2024-07-15 18:37:24.248638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.888 [2024-07-15 18:37:24.248654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.888 [2024-07-15 18:37:24.248668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:38.888 [2024-07-15 18:37:24.248685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.888 [2024-07-15 18:37:24.248688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.888 [2024-07-15 18:37:24.248693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.888 [2024-07-15 18:37:24.248704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfae40, cid 0, qid 0 00:22:38.888 [2024-07-15 18:37:24.248708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfafc0, cid 1, qid 0 00:22:38.888 [2024-07-15 18:37:24.248712] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb140, cid 2, qid 0 00:22:38.888 [2024-07-15 18:37:24.248716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.888 [2024-07-15 18:37:24.248719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.888 [2024-07-15 18:37:24.248815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.889 [2024-07-15 18:37:24.248820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.889 [2024-07-15 18:37:24.248823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.248826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.889 [2024-07-15 18:37:24.248830] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:38.889 [2024-07-15 18:37:24.248834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.248841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.248846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.248851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.248855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.248858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.889 [2024-07-15 18:37:24.248863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.889 [2024-07-15 18:37:24.248872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.889 [2024-07-15 18:37:24.248932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.889 [2024-07-15 18:37:24.248937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.889 [2024-07-15 18:37:24.248940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.248943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.889 [2024-07-15 18:37:24.248991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.248999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.889 [2024-07-15 18:37:24.249014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.889 [2024-07-15 18:37:24.249025] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.889 [2024-07-15 18:37:24.249101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.889 [2024-07-15 18:37:24.249107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.889 [2024-07-15 18:37:24.249110] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249113] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=4096, cccid=4 00:22:38.889 [2024-07-15 18:37:24.249116] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb440) on tqpair(0xb77ec0): expected_datao=0, payload_size=4096 00:22:38.889 [2024-07-15 18:37:24.249120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249126] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249129] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.889 [2024-07-15 18:37:24.249143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.889 [2024-07-15 18:37:24.249146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.889 [2024-07-15 18:37:24.249157] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:38.889 [2024-07-15 18:37:24.249167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.889 [2024-07-15 18:37:24.249189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.889 [2024-07-15 18:37:24.249199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.889 [2024-07-15 18:37:24.249276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.889 [2024-07-15 18:37:24.249281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.889 [2024-07-15 18:37:24.249284] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249287] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=4096, cccid=4 00:22:38.889 [2024-07-15 18:37:24.249291] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb440) on tqpair(0xb77ec0): expected_datao=0, payload_size=4096 00:22:38.889 [2024-07-15 18:37:24.249294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249300] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249303] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.889 
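The "Namespace 1 was added" message above marks the identify-active-ns / identify-ns states filling in the host-side namespace list. A minimal sketch of walking that list once initialization reaches the ready state; illustrative only, not part of the captured run:

    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/nvme.h"

    static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Walks the active-NSID list populated by the "identify active ns"
         * and "identify ns" states traced above. */
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("ns %" PRIu32 ": %" PRIu64 " bytes, %" PRIu32 "-byte sectors\n",
                   nsid, spdk_nvme_ns_get_size(ns), spdk_nvme_ns_get_sector_size(ns));
        }
    }
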
[2024-07-15 18:37:24.249317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.889 [2024-07-15 18:37:24.249322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.889 [2024-07-15 18:37:24.249325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.889 [2024-07-15 18:37:24.249347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.889 [2024-07-15 18:37:24.249376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.889 [2024-07-15 18:37:24.249386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.889 [2024-07-15 18:37:24.249460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.889 [2024-07-15 18:37:24.249466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.889 [2024-07-15 18:37:24.249469] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=4096, cccid=4 00:22:38.889 [2024-07-15 18:37:24.249475] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb440) on tqpair(0xb77ec0): expected_datao=0, payload_size=4096 00:22:38.889 [2024-07-15 18:37:24.249479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249487] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.889 [2024-07-15 18:37:24.249505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.889 [2024-07-15 18:37:24.249508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.889 [2024-07-15 18:37:24.249511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.889 [2024-07-15 18:37:24.249517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:38.889 [2024-07-15 18:37:24.249523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature 
(timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249548] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:38.890 [2024-07-15 18:37:24.249552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:38.890 [2024-07-15 18:37:24.249556] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:38.890 [2024-07-15 18:37:24.249568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.249577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.249582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.249594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.890 [2024-07-15 18:37:24.249615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.890 [2024-07-15 18:37:24.249619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb5c0, cid 5, qid 0 00:22:38.890 [2024-07-15 18:37:24.249692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.890 [2024-07-15 18:37:24.249698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.890 [2024-07-15 18:37:24.249701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.890 [2024-07-15 18:37:24.249709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.890 [2024-07-15 18:37:24.249714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.890 [2024-07-15 18:37:24.249717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb5c0) on tqpair=0xb77ec0 00:22:38.890 [2024-07-15 18:37:24.249728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.249737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.249745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbfb5c0, cid 5, qid 0 00:22:38.890 [2024-07-15 18:37:24.249809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.890 [2024-07-15 18:37:24.249815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.890 [2024-07-15 18:37:24.249817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb5c0) on tqpair=0xb77ec0 00:22:38.890 [2024-07-15 18:37:24.249828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.249837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.249846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb5c0, cid 5, qid 0 00:22:38.890 [2024-07-15 18:37:24.249918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.890 [2024-07-15 18:37:24.249924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.890 [2024-07-15 18:37:24.249927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb5c0) on tqpair=0xb77ec0 00:22:38.890 [2024-07-15 18:37:24.249938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.249941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.249946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.249955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb5c0, cid 5, qid 0 00:22:38.890 [2024-07-15 18:37:24.250015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.890 [2024-07-15 18:37:24.250021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.890 [2024-07-15 18:37:24.250023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.250027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb5c0) on tqpair=0xb77ec0 00:22:38.890 [2024-07-15 18:37:24.250038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.250042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.250048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.250055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.890 [2024-07-15 18:37:24.250058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb77ec0) 00:22:38.890 [2024-07-15 18:37:24.250063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.890 [2024-07-15 18:37:24.250069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb77ec0) 00:22:38.891 [2024-07-15 18:37:24.250077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.891 [2024-07-15 18:37:24.250083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb77ec0) 00:22:38.891 [2024-07-15 18:37:24.250091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.891 [2024-07-15 18:37:24.250102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb5c0, cid 5, qid 0 00:22:38.891 [2024-07-15 18:37:24.250106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb440, cid 4, qid 0 00:22:38.891 [2024-07-15 18:37:24.250110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb740, cid 6, qid 0 00:22:38.891 [2024-07-15 18:37:24.250114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb8c0, cid 7, qid 0 00:22:38.891 [2024-07-15 18:37:24.250251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.891 [2024-07-15 18:37:24.250257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.891 [2024-07-15 18:37:24.250260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250263] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=8192, cccid=5 00:22:38.891 [2024-07-15 18:37:24.250266] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb5c0) on tqpair(0xb77ec0): expected_datao=0, payload_size=8192 00:22:38.891 [2024-07-15 18:37:24.250270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250288] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250291] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.891 [2024-07-15 18:37:24.250301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.891 [2024-07-15 18:37:24.250304] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250306] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=512, cccid=4 00:22:38.891 [2024-07-15 18:37:24.250310] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb440) on tqpair(0xb77ec0): expected_datao=0, payload_size=512 00:22:38.891 [2024-07-15 18:37:24.250314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250319] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250322] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.891 [2024-07-15 18:37:24.250331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.891 [2024-07-15 18:37:24.250334] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250343] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=512, cccid=6 00:22:38.891 [2024-07-15 18:37:24.250347] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb740) on tqpair(0xb77ec0): expected_datao=0, payload_size=512 00:22:38.891 [2024-07-15 18:37:24.250352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250357] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250360] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:38.891 [2024-07-15 18:37:24.250369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:38.891 [2024-07-15 18:37:24.250372] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250375] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb77ec0): datao=0, datal=4096, cccid=7 00:22:38.891 [2024-07-15 18:37:24.250378] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfb8c0) on tqpair(0xb77ec0): expected_datao=0, payload_size=4096 00:22:38.891 [2024-07-15 18:37:24.250382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250387] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.891 [2024-07-15 18:37:24.250402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.891 [2024-07-15 18:37:24.250405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb5c0) on tqpair=0xb77ec0 00:22:38.891 [2024-07-15 18:37:24.250418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.891 [2024-07-15 18:37:24.250423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.891 [2024-07-15 18:37:24.250425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb440) on tqpair=0xb77ec0 00:22:38.891 [2024-07-15 18:37:24.250436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.891 [2024-07-15 18:37:24.250441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.891 [2024-07-15 18:37:24.250444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb740) on tqpair=0xb77ec0 00:22:38.891 [2024-07-15 18:37:24.250453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.891 [2024-07-15 18:37:24.250458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.891 [2024-07-15 18:37:24.250461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.891 [2024-07-15 18:37:24.250464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb8c0) on tqpair=0xb77ec0 00:22:38.891 
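The four GET LOG PAGE (02) admin commands traced above (LIDs 01h, 02h, 03h and 05h, each with nsid:ffffffff) are the host probing log pages during the "set supported log pages" state. A sketch of issuing one such fetch through the public API, here LID 02h (SMART / Health Information); the polling loop, globals, and function names are illustrative assumptions, not SPDK internals:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool g_log_done;

    static void log_page_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_log_done = true;
    }

    /* Fetch the SMART / Health Information page, mirroring the GET LOG PAGE
     * pattern in the trace above. */
    static int fetch_health_log(struct spdk_nvme_ctrlr *ctrlr,
                                struct spdk_nvme_health_information_page *page)
    {
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                        SPDK_NVME_LOG_HEALTH_INFORMATION,
                        SPDK_NVME_GLOBAL_NS_TAG,   /* nsid:ffffffff, as in the trace */
                        page, sizeof(*page), 0,
                        log_page_cb, NULL);
        if (rc != 0) {
            return rc;
        }
        while (!g_log_done) {
            /* Admin completions arrive on the same polled admin qpair the
             * DEBUG traces above come from. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
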
===================================================== 00:22:38.891 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.891 ===================================================== 00:22:38.891 Controller Capabilities/Features 00:22:38.891 ================================ 00:22:38.891 Vendor ID: 8086 00:22:38.891 Subsystem Vendor ID: 8086 00:22:38.891 Serial Number: SPDK00000000000001 00:22:38.891 Model Number: SPDK bdev Controller 00:22:38.891 Firmware Version: 24.09 00:22:38.891 Recommended Arb Burst: 6 00:22:38.891 IEEE OUI Identifier: e4 d2 5c 00:22:38.891 Multi-path I/O 00:22:38.891 May have multiple subsystem ports: Yes 00:22:38.891 May have multiple controllers: Yes 00:22:38.891 Associated with SR-IOV VF: No 00:22:38.891 Max Data Transfer Size: 131072 00:22:38.891 Max Number of Namespaces: 32 00:22:38.891 Max Number of I/O Queues: 127 00:22:38.891 NVMe Specification Version (VS): 1.3 00:22:38.891 NVMe Specification Version (Identify): 1.3 00:22:38.891 Maximum Queue Entries: 128 00:22:38.891 Contiguous Queues Required: Yes 00:22:38.891 Arbitration Mechanisms Supported 00:22:38.891 Weighted Round Robin: Not Supported 00:22:38.891 Vendor Specific: Not Supported 00:22:38.891 Reset Timeout: 15000 ms 00:22:38.891 Doorbell Stride: 4 bytes 00:22:38.891 NVM Subsystem Reset: Not Supported 00:22:38.891 Command Sets Supported 00:22:38.892 NVM Command Set: Supported 00:22:38.892 Boot Partition: Not Supported 00:22:38.892 Memory Page Size Minimum: 4096 bytes 00:22:38.892 Memory Page Size Maximum: 4096 bytes 00:22:38.892 Persistent Memory Region: Not Supported 00:22:38.892 Optional Asynchronous Events Supported 00:22:38.892 Namespace Attribute Notices: Supported 00:22:38.892 Firmware Activation Notices: Not Supported 00:22:38.892 ANA Change Notices: Not Supported 00:22:38.892 PLE Aggregate Log Change Notices: Not Supported 00:22:38.892 LBA Status Info Alert Notices: Not Supported 00:22:38.892 EGE Aggregate Log Change Notices: Not Supported 00:22:38.892 Normal NVM Subsystem Shutdown event: Not Supported 00:22:38.892 Zone Descriptor Change Notices: Not Supported 00:22:38.892 Discovery Log Change Notices: Not Supported 00:22:38.892 Controller Attributes 00:22:38.892 128-bit Host Identifier: Supported 00:22:38.892 Non-Operational Permissive Mode: Not Supported 00:22:38.892 NVM Sets: Not Supported 00:22:38.892 Read Recovery Levels: Not Supported 00:22:38.892 Endurance Groups: Not Supported 00:22:38.892 Predictable Latency Mode: Not Supported 00:22:38.892 Traffic Based Keep Alive: Not Supported 00:22:38.892 Namespace Granularity: Not Supported 00:22:38.892 SQ Associations: Not Supported 00:22:38.892 UUID List: Not Supported 00:22:38.892 Multi-Domain Subsystem: Not Supported 00:22:38.892 Fixed Capacity Management: Not Supported 00:22:38.892 Variable Capacity Management: Not Supported 00:22:38.892 Delete Endurance Group: Not Supported 00:22:38.892 Delete NVM Set: Not Supported 00:22:38.892 Extended LBA Formats Supported: Not Supported 00:22:38.892 Flexible Data Placement Supported: Not Supported 00:22:38.892 00:22:38.892 Controller Memory Buffer Support 00:22:38.892 ================================ 00:22:38.892 Supported: No 00:22:38.892 00:22:38.892 Persistent Memory Region Support 00:22:38.892 ================================ 00:22:38.892 Supported: No 00:22:38.892 00:22:38.892 Admin Command Set Attributes 00:22:38.892 ============================ 00:22:38.892 Security Send/Receive: Not Supported 00:22:38.892 Format NVM: Not Supported 00:22:38.892 Firmware Activate/Download: 
Not Supported 00:22:38.892 Namespace Management: Not Supported 00:22:38.892 Device Self-Test: Not Supported 00:22:38.892 Directives: Not Supported 00:22:38.892 NVMe-MI: Not Supported 00:22:38.892 Virtualization Management: Not Supported 00:22:38.892 Doorbell Buffer Config: Not Supported 00:22:38.892 Get LBA Status Capability: Not Supported 00:22:38.892 Command & Feature Lockdown Capability: Not Supported 00:22:38.892 Abort Command Limit: 4 00:22:38.892 Async Event Request Limit: 4 00:22:38.892 Number of Firmware Slots: N/A 00:22:38.892 Firmware Slot 1 Read-Only: N/A 00:22:38.892 Firmware Activation Without Reset: N/A 00:22:38.892 Multiple Update Detection Support: N/A 00:22:38.892 Firmware Update Granularity: No Information Provided 00:22:38.892 Per-Namespace SMART Log: No 00:22:38.892 Asymmetric Namespace Access Log Page: Not Supported 00:22:38.892 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:38.892 Command Effects Log Page: Supported 00:22:38.892 Get Log Page Extended Data: Supported 00:22:38.892 Telemetry Log Pages: Not Supported 00:22:38.892 Persistent Event Log Pages: Not Supported 00:22:38.892 Supported Log Pages Log Page: May Support 00:22:38.892 Commands Supported & Effects Log Page: Not Supported 00:22:38.892 Feature Identifiers & Effects Log Page: May Support 00:22:38.892 NVMe-MI Commands & Effects Log Page: May Support 00:22:38.892 Data Area 4 for Telemetry Log: Not Supported 00:22:38.892 Error Log Page Entries Supported: 128 00:22:38.892 Keep Alive: Supported 00:22:38.892 Keep Alive Granularity: 10000 ms 00:22:38.892 00:22:38.892 NVM Command Set Attributes 00:22:38.892 ========================== 00:22:38.892 Submission Queue Entry Size 00:22:38.892 Max: 64 00:22:38.892 Min: 64 00:22:38.892 Completion Queue Entry Size 00:22:38.892 Max: 16 00:22:38.892 Min: 16 00:22:38.892 Number of Namespaces: 32 00:22:38.892 Compare Command: Supported 00:22:38.892 Write Uncorrectable Command: Not Supported 00:22:38.892 Dataset Management Command: Supported 00:22:38.892 Write Zeroes Command: Supported 00:22:38.892 Set Features Save Field: Not Supported 00:22:38.892 Reservations: Supported 00:22:38.892 Timestamp: Not Supported 00:22:38.892 Copy: Supported 00:22:38.892 Volatile Write Cache: Present 00:22:38.892 Atomic Write Unit (Normal): 1 00:22:38.892 Atomic Write Unit (PFail): 1 00:22:38.892 Atomic Compare & Write Unit: 1 00:22:38.892 Fused Compare & Write: Supported 00:22:38.892 Scatter-Gather List 00:22:38.892 SGL Command Set: Supported 00:22:38.892 SGL Keyed: Supported 00:22:38.892 SGL Bit Bucket Descriptor: Not Supported 00:22:38.892 SGL Metadata Pointer: Not Supported 00:22:38.892 Oversized SGL: Not Supported 00:22:38.892 SGL Metadata Address: Not Supported 00:22:38.892 SGL Offset: Supported 00:22:38.892 Transport SGL Data Block: Not Supported 00:22:38.892 Replay Protected Memory Block: Not Supported 00:22:38.892 00:22:38.892 Firmware Slot Information 00:22:38.892 ========================= 00:22:38.892 Active slot: 1 00:22:38.892 Slot 1 Firmware Revision: 24.09 00:22:38.892 00:22:38.892 00:22:38.892 Commands Supported and Effects 00:22:38.892 ============================== 00:22:38.892 Admin Commands 00:22:38.892 -------------- 00:22:38.892 Get Log Page (02h): Supported 00:22:38.892 Identify (06h): Supported 00:22:38.892 Abort (08h): Supported 00:22:38.892 Set Features (09h): Supported 00:22:38.892 Get Features (0Ah): Supported 00:22:38.892 Asynchronous Event Request (0Ch): Supported 00:22:38.892 Keep Alive (18h): Supported 00:22:38.892 I/O Commands 00:22:38.892 ------------ 00:22:38.892 
Flush (00h): Supported LBA-Change 00:22:38.892 Write (01h): Supported LBA-Change 00:22:38.892 Read (02h): Supported 00:22:38.892 Compare (05h): Supported 00:22:38.893 Write Zeroes (08h): Supported LBA-Change 00:22:38.893 Dataset Management (09h): Supported LBA-Change 00:22:38.893 Copy (19h): Supported LBA-Change 00:22:38.893 00:22:38.893 Error Log 00:22:38.893 ========= 00:22:38.893 00:22:38.893 Arbitration 00:22:38.893 =========== 00:22:38.893 Arbitration Burst: 1 00:22:38.893 00:22:38.893 Power Management 00:22:38.893 ================ 00:22:38.893 Number of Power States: 1 00:22:38.893 Current Power State: Power State #0 00:22:38.893 Power State #0: 00:22:38.893 Max Power: 0.00 W 00:22:38.893 Non-Operational State: Operational 00:22:38.893 Entry Latency: Not Reported 00:22:38.893 Exit Latency: Not Reported 00:22:38.893 Relative Read Throughput: 0 00:22:38.893 Relative Read Latency: 0 00:22:38.893 Relative Write Throughput: 0 00:22:38.893 Relative Write Latency: 0 00:22:38.893 Idle Power: Not Reported 00:22:38.893 Active Power: Not Reported 00:22:38.893 Non-Operational Permissive Mode: Not Supported 00:22:38.893 00:22:38.893 Health Information 00:22:38.893 ================== 00:22:38.893 Critical Warnings: 00:22:38.893 Available Spare Space: OK 00:22:38.893 Temperature: OK 00:22:38.893 Device Reliability: OK 00:22:38.893 Read Only: No 00:22:38.893 Volatile Memory Backup: OK 00:22:38.893 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:38.893 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:38.893 Available Spare: 0% 00:22:38.893 Available Spare Threshold: 0% 00:22:38.893 Life Percentage Used:[2024-07-15 18:37:24.250545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb77ec0) 00:22:38.893 [2024-07-15 18:37:24.250555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.893 [2024-07-15 18:37:24.250566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb8c0, cid 7, qid 0 00:22:38.893 [2024-07-15 18:37:24.250639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.893 [2024-07-15 18:37:24.250644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.893 [2024-07-15 18:37:24.250647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb8c0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250678] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:38.893 [2024-07-15 18:37:24.250685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfae40) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.893 [2024-07-15 18:37:24.250696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfafc0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.893 [2024-07-15 18:37:24.250704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb140) on tqpair=0xb77ec0 00:22:38.893 
[2024-07-15 18:37:24.250708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.893 [2024-07-15 18:37:24.250712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.893 [2024-07-15 18:37:24.250722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.893 [2024-07-15 18:37:24.250733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.893 [2024-07-15 18:37:24.250745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.893 [2024-07-15 18:37:24.250807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.893 [2024-07-15 18:37:24.250812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.893 [2024-07-15 18:37:24.250815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.893 [2024-07-15 18:37:24.250835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.893 [2024-07-15 18:37:24.250846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.893 [2024-07-15 18:37:24.250921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.893 [2024-07-15 18:37:24.250927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.893 [2024-07-15 18:37:24.250930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.250936] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:38.893 [2024-07-15 18:37:24.250940] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:38.893 [2024-07-15 18:37:24.250947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.250954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.893 [2024-07-15 18:37:24.250959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.893 [2024-07-15 18:37:24.250968] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.893 [2024-07-15 18:37:24.251033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.893 [2024-07-15 18:37:24.251038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.893 [2024-07-15 18:37:24.251041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.251044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.893 [2024-07-15 18:37:24.251054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.251057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.893 [2024-07-15 18:37:24.251060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.893 [2024-07-15 18:37:24.251066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.893 [2024-07-15 18:37:24.251075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.893 [2024-07-15 18:37:24.251137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.893 [2024-07-15 18:37:24.251142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.893 [2024-07-15 18:37:24.251145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.894 [2024-07-15 18:37:24.251156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.894 [2024-07-15 18:37:24.251168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.894 [2024-07-15 18:37:24.251176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.894 [2024-07-15 18:37:24.251239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.894 [2024-07-15 18:37:24.251244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.894 [2024-07-15 18:37:24.251247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.894 [2024-07-15 18:37:24.251258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.251264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.894 [2024-07-15 18:37:24.251270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.894 [2024-07-15 18:37:24.251278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.894 [2024-07-15 18:37:24.255342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.894 [2024-07-15 
18:37:24.255350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.894 [2024-07-15 18:37:24.255353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.255356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.894 [2024-07-15 18:37:24.255366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.255369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.255372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb77ec0) 00:22:38.894 [2024-07-15 18:37:24.255378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.894 [2024-07-15 18:37:24.255389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfb2c0, cid 3, qid 0 00:22:38.894 [2024-07-15 18:37:24.255559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:38.894 [2024-07-15 18:37:24.255565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:38.894 [2024-07-15 18:37:24.255568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:38.894 [2024-07-15 18:37:24.255571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfb2c0) on tqpair=0xb77ec0 00:22:38.894 [2024-07-15 18:37:24.255577] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:22:38.894 0% 00:22:38.894 Data Units Read: 0 00:22:38.894 Data Units Written: 0 00:22:38.894 Host Read Commands: 0 00:22:38.894 Host Write Commands: 0 00:22:38.894 Controller Busy Time: 0 minutes 00:22:38.894 Power Cycles: 0 00:22:38.894 Power On Hours: 0 hours 00:22:38.894 Unsafe Shutdowns: 0 00:22:38.894 Unrecoverable Media Errors: 0 00:22:38.894 Lifetime Error Log Entries: 0 00:22:38.894 Warning Temperature Time: 0 minutes 00:22:38.894 Critical Temperature Time: 0 minutes 00:22:38.894 00:22:38.894 Number of Queues 00:22:38.894 ================ 00:22:38.894 Number of I/O Submission Queues: 127 00:22:38.894 Number of I/O Completion Queues: 127 00:22:38.894 00:22:38.894 Active Namespaces 00:22:38.894 ================= 00:22:38.894 Namespace ID:1 00:22:38.894 Error Recovery Timeout: Unlimited 00:22:38.894 Command Set Identifier: NVM (00h) 00:22:38.894 Deallocate: Supported 00:22:38.894 Deallocated/Unwritten Error: Not Supported 00:22:38.894 Deallocated Read Value: Unknown 00:22:38.894 Deallocate in Write Zeroes: Not Supported 00:22:38.894 Deallocated Guard Field: 0xFFFF 00:22:38.894 Flush: Supported 00:22:38.894 Reservation: Supported 00:22:38.894 Namespace Sharing Capabilities: Multiple Controllers 00:22:38.894 Size (in LBAs): 131072 (0GiB) 00:22:38.894 Capacity (in LBAs): 131072 (0GiB) 00:22:38.894 Utilization (in LBAs): 131072 (0GiB) 00:22:38.894 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:38.894 EUI64: ABCDEF0123456789 00:22:38.894 UUID: e447a8ed-c6da-4b93-904f-2efe403e3955 00:22:38.894 Thin Provisioning: Not Supported 00:22:38.894 Per-NS Atomic Units: Yes 00:22:38.894 Atomic Boundary Size (Normal): 0 00:22:38.894 Atomic Boundary Size (PFail): 0 00:22:38.894 Atomic Boundary Offset: 0 00:22:38.894 Maximum Single Source Range Length: 65535 00:22:38.894 Maximum Copy Length: 65535 00:22:38.894 Maximum Source Range Count: 1 00:22:38.894 NGUID/EUI64 Never Reused: No 00:22:38.894 Namespace Write Protected: No 
00:22:38.894 Number of LBA Formats: 1 00:22:38.894 Current LBA Format: LBA Format #00 00:22:38.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:38.894 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.894 rmmod nvme_tcp 00:22:38.894 rmmod nvme_fabrics 00:22:38.894 rmmod nvme_keyring 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3996917 ']' 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3996917 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3996917 ']' 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3996917 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996917 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996917' 00:22:38.894 killing process with pid 3996917 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3996917 00:22:38.894 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3996917 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.154 18:37:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.721 18:37:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.721 00:22:41.721 real 0m9.505s 00:22:41.721 user 0m7.331s 00:22:41.721 sys 0m4.674s 00:22:41.721 18:37:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.721 18:37:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.721 ************************************ 00:22:41.721 END TEST nvmf_identify 00:22:41.721 ************************************ 00:22:41.721 18:37:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.721 18:37:26 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:41.721 18:37:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:41.721 18:37:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.721 18:37:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.721 ************************************ 00:22:41.721 START TEST nvmf_perf 00:22:41.721 ************************************ 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:41.721 * Looking for test storage... 00:22:41.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
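The nvmftestinit call near the top of perf.sh is what produces the setup trace that follows: the harness enumerates the two e810 ports, keeps one in the root namespace as the initiator side (cvl_0_1, 10.0.0.1) and moves its peer into a private network namespace where the target will listen (cvl_0_0, 10.0.0.2). A condensed sketch of that sequence, using the interface and namespace names from this run (the exact commands, including the address flushes omitted here, appear in the trace below):

# Move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and sanity-check connectivity in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1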
00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.721 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.722 18:37:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.989 18:37:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:46.989 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.989 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.989 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.248 18:37:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:22:47.248 00:22:47.248 --- 10.0.0.2 ping statistics --- 00:22:47.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.248 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:22:47.248 00:22:47.248 --- 10.0.0.1 ping statistics --- 00:22:47.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.248 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4000629 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4000629 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 4000629 ']' 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.248 18:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:47.248 [2024-07-15 18:37:32.674406] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:22:47.248 [2024-07-15 18:37:32.674448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.248 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.248 [2024-07-15 18:37:32.746064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.506 [2024-07-15 18:37:32.820460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.506 [2024-07-15 18:37:32.820498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.506 [2024-07-15 18:37:32.820505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.506 [2024-07-15 18:37:32.820510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.506 [2024-07-15 18:37:32.820515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.506 [2024-07-15 18:37:32.820632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.506 [2024-07-15 18:37:32.820747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.506 [2024-07-15 18:37:32.820856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.506 [2024-07-15 18:37:32.820858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:48.072 18:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:51.359 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:51.359 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:51.359 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:51.359 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:51.617 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:51.617 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:51.617 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:51.617 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:51.617 18:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.617 [2024-07-15 18:37:37.070963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:22:51.617 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.875 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:51.875 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.133 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:52.133 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:52.133 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.391 [2024-07-15 18:37:37.787586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.392 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:52.650 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:52.650 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:52.650 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:52.650 18:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:54.025 Initializing NVMe Controllers 00:22:54.025 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:54.025 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:54.025 Initialization complete. Launching workers. 00:22:54.025 ======================================================== 00:22:54.025 Latency(us) 00:22:54.025 Device Information : IOPS MiB/s Average min max 00:22:54.025 PCIE (0000:5e:00.0) NSID 1 from core 0: 99616.52 389.13 320.85 19.74 5221.88 00:22:54.025 ======================================================== 00:22:54.025 Total : 99616.52 389.13 320.85 19.74 5221.88 00:22:54.025 00:22:54.025 18:37:39 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:54.025 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.401 Initializing NVMe Controllers 00:22:55.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.401 Initialization complete. Launching workers. 
00:22:55.401 ======================================================== 00:22:55.401 Latency(us) 00:22:55.401 Device Information : IOPS MiB/s Average min max 00:22:55.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.61 0.43 9276.10 105.03 45712.11 00:22:55.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.81 0.21 19026.86 7100.42 47884.59 00:22:55.401 ======================================================== 00:22:55.401 Total : 164.42 0.64 12467.26 105.03 47884.59 00:22:55.401 00:22:55.401 18:37:40 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:55.401 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.776 Initializing NVMe Controllers 00:22:56.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:56.776 Initialization complete. Launching workers. 00:22:56.776 ======================================================== 00:22:56.776 Latency(us) 00:22:56.776 Device Information : IOPS MiB/s Average min max 00:22:56.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11559.93 45.16 2768.63 351.19 6330.71 00:22:56.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3822.10 14.93 8400.61 5487.73 15894.78 00:22:56.776 ======================================================== 00:22:56.776 Total : 15382.03 60.09 4168.05 351.19 15894.78 00:22:56.776 00:22:56.776 18:37:41 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:56.776 18:37:41 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:56.776 18:37:41 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:56.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.308 Initializing NVMe Controllers 00:22:59.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.308 Controller IO queue size 128, less than required. 00:22:59.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.308 Controller IO queue size 128, less than required. 00:22:59.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:59.309 Initialization complete. Launching workers. 
00:22:59.309 ======================================================== 00:22:59.309 Latency(us) 00:22:59.309 Device Information : IOPS MiB/s Average min max 00:22:59.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2006.41 501.60 64721.28 41993.06 110347.02 00:22:59.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.82 148.71 222519.22 54968.69 333851.19 00:22:59.309 ======================================================== 00:22:59.309 Total : 2601.23 650.31 100804.97 41993.06 333851.19 00:22:59.309 00:22:59.309 18:37:44 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:59.309 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.309 No valid NVMe controllers or AIO or URING devices found 00:22:59.309 Initializing NVMe Controllers 00:22:59.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.309 Controller IO queue size 128, less than required. 00:22:59.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:59.309 Controller IO queue size 128, less than required. 00:22:59.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:59.309 WARNING: Some requested NVMe devices were skipped 00:22:59.309 18:37:44 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:59.309 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.838 Initializing NVMe Controllers 00:23:01.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.838 Controller IO queue size 128, less than required. 00:23:01.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.838 Controller IO queue size 128, less than required. 00:23:01.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.838 Initialization complete. Launching workers. 
00:23:01.838
00:23:01.838 ====================
00:23:01.838 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:01.838 TCP transport:
00:23:01.838 polls: 12734
00:23:01.838 idle_polls: 8481
00:23:01.838 sock_completions: 4253
00:23:01.838 nvme_completions: 6965
00:23:01.838 submitted_requests: 10386
00:23:01.838 queued_requests: 1
00:23:01.838
00:23:01.838 ====================
00:23:01.838 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:01.838 TCP transport:
00:23:01.838 polls: 13287
00:23:01.838 idle_polls: 8397
00:23:01.838 sock_completions: 4890
00:23:01.838 nvme_completions: 7425
00:23:01.838 submitted_requests: 11146
00:23:01.838 queued_requests: 1
00:23:01.838 ========================================================
00:23:01.839 Latency(us)
00:23:01.839 Device Information : IOPS MiB/s Average min max
00:23:01.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1739.17 434.79 74941.78 43435.90 119722.32
00:23:01.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1854.05 463.51 70206.31 31996.35 117988.28
00:23:01.839 ========================================================
00:23:01.839 Total : 3593.22 898.30 72498.34 31996.35 119722.32
00:23:01.839
00:23:01.839 18:37:47 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:02.096 rmmod nvme_tcp
00:23:02.096 rmmod nvme_fabrics
00:23:02.096 rmmod nvme_keyring
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4000629 ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4000629
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 4000629 ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 4000629
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4000629
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4000629'
00:23:02.096 killing process with pid 4000629
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 4000629
00:23:02.096 18:37:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 4000629
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:04.618 18:37:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:06.540 18:37:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:06.540
00:23:06.540 real 0m24.894s
00:23:06.540 user 1m6.712s
00:23:06.540 sys 0m7.780s
00:23:06.540 18:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:06.540 18:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:06.540 ************************************
00:23:06.540 END TEST nvmf_perf
00:23:06.540 ************************************
00:23:06.540 18:37:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:06.540 18:37:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:06.540 18:37:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:06.540 18:37:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:06.540 18:37:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:06.540 ************************************
00:23:06.540 START TEST nvmf_fio_host
00:23:06.540 ************************************
00:23:06.540 18:37:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:06.540 * Looking for test storage...
00:23:06.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.540 18:37:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.540 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.540 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.540 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.541 18:37:51 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.809 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.809 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.809 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.810 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.810 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:11.810 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:12.069 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:12.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:12.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:23:12.069
00:23:12.069 --- 10.0.0.2 ping statistics ---
00:23:12.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:12.069 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:12.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:12.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:23:12.070
00:23:12.070 --- 10.0.0.1 ping statistics ---
00:23:12.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:12.070 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4006789
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4006789
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 4006789 ']'
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:12.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:12.070 18:37:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.329 [2024-07-15 18:37:57.600425] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
[2024-07-15 18:37:57.600468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:12.329 EAL: No free 2048 kB hugepages reported on node 1
00:23:12.329 [2024-07-15 18:37:57.669375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:12.329 [2024-07-15 18:37:57.752396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:12.329 [2024-07-15 18:37:57.752433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-15 18:37:57.752440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-15 18:37:57.752446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-15 18:37:57.752451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-07-15 18:37:57.752508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 18:37:57.752667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-15 18:37:57.752541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-15 18:37:57.752669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:12.896 18:37:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:12.896 18:37:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0
00:23:12.896 18:37:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:13.154 [2024-07-15 18:37:58.584680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:13.154 18:37:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:23:13.154 18:37:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:13.154 18:37:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.154 18:37:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:23:13.412 Malloc1
00:23:13.412 18:37:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:13.671 18:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:13.671 18:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:13.929 [2024-07-15 18:37:59.338435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:13.929 18:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:14.188 18:37:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:23:14.465 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:23:14.465 fio-3.35
00:23:14.465 Starting 1 thread
00:23:14.465 EAL: No free 2048 kB hugepages reported on node 1
00:23:17.047
00:23:17.047 test: (groupid=0, jobs=1): err= 0: pid=4007383: Mon Jul 15 18:38:02 2024
00:23:17.047 read: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(94.5MiB/2005msec)
00:23:17.047 slat (nsec): min=1564, max=250041, avg=1731.84, stdev=2242.78
00:23:17.047 clat (usec): min=3207, max=10291, avg=5857.75, stdev=445.79
00:23:17.047 lat (usec): min=3242, max=10293, avg=5859.48, stdev=445.77
00:23:17.047 clat percentiles (usec):
00:23:17.047 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538],
00:23:17.047 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5997],
00:23:17.047 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6521],
00:23:17.047 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8160], 99.95th=[ 8979],
00:23:17.047 | 99.99th=[10291]
00:23:17.047 bw ( KiB/s): min=47336, max=48944, per=99.98%, avg=48232.00, stdev=766.97, samples=4
00:23:17.047 iops : min=11834, max=12236, avg=12058.00, stdev=191.74, samples=4
00:23:17.047 write: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(94.1MiB/2005msec); 0 zone resets
00:23:17.047 slat (nsec): min=1602, max=236094, avg=1802.49, stdev=1693.42
00:23:17.047 clat (usec): min=2475, max=8862, avg=4729.97, stdev=366.57
00:23:17.047 lat (usec): min=2491, max=8863, avg=4731.77, stdev=366.60
00:23:17.047 clat percentiles (usec):
00:23:17.047 | 1.00th=[ 3851], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4424],
00:23:17.047 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817],
00:23:17.047 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276],
00:23:17.047 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 7242], 99.95th=[ 8225],
00:23:17.047 | 99.99th=[ 8848]
00:23:17.047 bw ( KiB/s): min=47744, max=48536, per=100.00%, avg=48052.00, stdev=339.07, samples=4
00:23:17.047 iops : min=11936, max=12134, avg=12013.00, stdev=84.77, samples=4
00:23:17.048 lat (msec) : 4=1.06%, 10=98.92%, 20=0.01%
00:23:17.048 cpu : usr=73.65%, sys=25.10%, ctx=129, majf=0, minf=6
00:23:17.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:23:17.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:17.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:17.048 issued rwts: total=24182,24087,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:17.048 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:17.048
00:23:17.048 Run status group 0 (all jobs):
00:23:17.048 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=94.5MiB (99.0MB), run=2005-2005msec
00:23:17.048 WRITE: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=94.1MiB (98.7MB), run=2005-2005msec
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:17.048 18:38:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:17.048 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:23:17.048 fio-3.35
00:23:17.048 Starting 1 thread
00:23:17.048 EAL: No free 2048 kB hugepages reported on node 1
00:23:19.578
00:23:19.578 test: (groupid=0, jobs=1): err= 0: pid=4007830: Mon Jul 15 18:38:04 2024
00:23:19.578 read: IOPS=10.9k, BW=170MiB/s (179MB/s)(348MiB/2044msec)
00:23:19.578 slat (nsec): min=2498, max=86684, avg=2832.41, stdev=1296.36
00:23:19.578 clat (usec): min=1511, max=51089, avg=6685.66, stdev=2704.90
00:23:19.578 lat (usec): min=1514, max=51093, avg=6688.49, stdev=2704.98
00:23:19.578 clat percentiles (usec):
00:23:19.578 | 1.00th=[ 3458], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5211],
00:23:19.578 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7046],
00:23:19.578 | 70.00th=[ 7373], 80.00th=[ 7767], 90.00th=[ 8586], 95.00th=[ 9241],
00:23:19.578 | 99.00th=[11076], 99.50th=[11600], 99.90th=[49021], 99.95th=[50070],
00:23:19.578 | 99.99th=[51119]
00:23:19.578 bw ( KiB/s): min=88192, max=96480, per=52.61%, avg=91776.00, stdev=3707.86, samples=4
00:23:19.578 iops : min= 5512, max= 6030, avg=5736.00, stdev=231.74, samples=4
00:23:19.578 write: IOPS=6497, BW=102MiB/s (106MB/s)(188MiB/1847msec); 0 zone resets
00:23:19.578 slat (usec): min=29, max=382, avg=31.85, stdev= 7.58
00:23:19.578 clat (usec): min=4164, max=50938, avg=8693.20, stdev=3232.59
00:23:19.578 lat (usec): min=4195, max=50969, avg=8725.05, stdev=3233.30
00:23:19.578 clat percentiles (usec):
00:23:19.578 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242],
00:23:19.578 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8848],
00:23:19.578 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207],
00:23:19.578 | 99.00th=[12649], 99.50th=[45351], 99.90th=[50070], 99.95th=[50594],
00:23:19.578 | 99.99th=[51119]
00:23:19.578 bw ( KiB/s): min=92160, max=99552, per=91.86%, avg=95496.00, stdev=3255.98, samples=4
00:23:19.578 iops : min= 5760, max= 6222, avg=5968.50, stdev=203.50, samples=4
00:23:19.578 lat (msec) : 2=0.09%, 4=2.01%, 10=90.91%, 20=6.61%, 50=0.28%
00:23:19.578 lat (msec) : 100=0.09%
00:23:19.578 cpu : usr=86.98%, sys=12.33%, ctx=41, majf=0, minf=3
00:23:19.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:23:19.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:19.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:19.578 issued rwts: total=22285,12001,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:19.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:19.578
00:23:19.578 Run status group 0 (all jobs):
00:23:19.578 READ: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=348MiB (365MB), run=2044-2044msec
00:23:19.578 WRITE: bw=102MiB/s (106MB/s), 102MiB/s-102MiB/s (106MB/s-106MB/s), io=188MiB (197MB), run=1847-1847msec
00:23:19.578 18:38:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:19.578 18:38:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:23:19.578 18:38:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:19.578 rmmod nvme_tcp
00:23:19.578 rmmod nvme_fabrics
00:23:19.578 rmmod nvme_keyring
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4006789 ']'
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4006789
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 4006789 ']'
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 4006789
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4006789
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4006789'
00:23:19.578 killing process with pid 4006789
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 4006789
00:23:19.578 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 4006789
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:19.835 18:38:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:22.367 18:38:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:22.367
00:23:22.367 real 0m15.678s
00:23:22.367 user 0m46.674s
00:23:22.367 sys 0m6.236s
00:23:22.367 18:38:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:22.367 18:38:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.367 ************************************
00:23:22.367 END TEST nvmf_fio_host
00:23:22.367 ************************************
00:23:22.367 18:38:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:22.367 18:38:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:23:22.367 18:38:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:22.367 18:38:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:22.367 18:38:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:22.367 ************************************
00:23:22.367 START TEST nvmf_failover
00:23:22.367 ************************************
00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:23:22.367 * Looking for test storage...
00:23:22.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.367 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.368 18:38:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:27.638 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.639 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.639 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.639 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.639 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.639 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.900 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.900 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:27.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms
00:23:27.901
00:23:27.901 --- 10.0.0.2 ping statistics ---
00:23:27.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:27.901 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:27.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:27.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:23:27.901
00:23:27.901 --- 10.0.0.1 ping statistics ---
00:23:27.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:27.901 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
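At this point nvmf_tcp_init has finished building the test topology by hand: the first detected E810 port (cvl_0_0) was moved into a fresh network namespace, cvl_0_0_ns_spdk, as the target-side interface at 10.0.0.2, while cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1, and the two pings prove both directions work. A minimal stand-alone sketch of the same plumbing, assuming the interface names detected above and run as root:

    # target port lives in its own namespace; initiator stays in the root one
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Splitting target and initiator across namespaces lets a single host drive real NVMe/TCP traffic over the physical back-to-back NICs rather than over loopback.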
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4011703
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4011703
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4011703 ']'
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:27.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:27.901 18:38:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:27.901 [2024-07-15 18:38:13.379311] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:23:27.901 [2024-07-15 18:38:13.379356] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:27.901 EAL: No free 2048 kB hugepages reported on node 1
00:23:28.166 [2024-07-15 18:38:13.450040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:28.166 [2024-07-15 18:38:13.521274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:28.166 [2024-07-15 18:38:13.521316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:28.166 [2024-07-15 18:38:13.521323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:28.166 [2024-07-15 18:38:13.521329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:28.166 [2024-07-15 18:38:13.521333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:28.166 [2024-07-15 18:38:13.521467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:28.166 [2024-07-15 18:38:13.521575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:28.166 [2024-07-15 18:38:13.521576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:28.731 18:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:28.989 [2024-07-15 18:38:14.385744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:28.989 18:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:29.248 Malloc0
00:23:29.248 18:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:29.248 18:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:29.506 18:38:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:29.764 [2024-07-15 18:38:15.098142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:29.764 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
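Those rpc.py calls are the entire target-side provisioning for this test: one TCP transport, one 64 MiB malloc bdev, one subsystem with that bdev as a namespace, and a listener per portal. Collected into a runnable sketch, with paths as in this workspace (the loop anticipates the third listener on port 4422 that is added immediately below):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # rpc.py talks to the target's /var/tmp/spdk.sock by default
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done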
00:23:29.764 [2024-07-15 18:38:15.266630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:29.764 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:30.022 [2024-07-15 18:38:15.435181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4011994
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4011994 /var/tmp/bdevperf.sock
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4011994 ']'
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:30.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:30.022 18:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:30.957 18:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:30.957 18:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:30.957 18:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:31.215 NVMe0n1
00:23:31.215 18:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:31.473 00
00:23:31.473 18:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
18:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4012215
18:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:23:32.407 18:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:32.665 [2024-07-15 18:38:18.012487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4080 is same with the state(5) to be set
00:23:32.665 (the tcp.c:1607 message above repeats roughly 30 more times for tqpair=0x20e4080 while the 4420 listener is torn down)
00:23:32.666 18:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:35.946 18:38:21 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:35.946 00
00:23:35.946 18:38:21 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:36.203 18:38:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:39.484 18:38:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:39.484 [2024-07-15 18:38:24.685173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:39.484 18:38:24 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:40.417 18:38:25 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:40.417 [2024-07-15 18:38:25.890098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5aa0 is same with the state(5) to be set
00:23:40.417 (the tcp.c:1607 message above repeats roughly 23 more times for tqpair=0x20e5aa0 while the 4422 listener is torn down)
00:23:40.417 18:38:25 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4012215
00:23:47.020 0
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4011994
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4011994 ']'
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4011994
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
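The block above is the whole failover exercise in miniature: the same controller name NVMe0 is attached over two portals so bdev_nvme builds one multipath bdev, bdevperf runs verify I/O against it, and listeners are then removed and re-added underneath the running workload. Each removal produces the burst of tcp.c:1607 teardown messages, while the run survives on a remaining path and wait eventually returns success (the lone 0 above). Reduced to the bare RPC choreography, as a sketch that assumes bdevperf is already listening on /var/tmp/bdevperf.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BP="-s /var/tmp/bdevperf.sock"
    # two paths to the same subsystem under one controller name
    $RPC $BP bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC $BP bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    $RPC $BP bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over again
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first portal
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420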
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:47.020 18:38:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4011994
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4011994'
00:23:47.020 killing process with pid 4011994
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4011994
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4011994
00:23:47.020 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:47.020 [2024-07-15 18:38:15.506297] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:23:47.020 [2024-07-15 18:38:15.506355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011994 ]
00:23:47.020 EAL: No free 2048 kB hugepages reported on node 1
00:23:47.020 [2024-07-15 18:38:15.573797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:47.020 [2024-07-15 18:38:15.648564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:47.020 Running I/O for 15 seconds...
00:23:47.020 [2024-07-15 18:38:18.013529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.020 [2024-07-15 18:38:18.013567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.020 (the print_command/print_completion pair above repeats for every I/O still queued on the deleted submission queue: READs for lba 101112 through 101864 and then WRITEs from lba 101872 on, each completed ABORTED - SQ DELETION)
00:23:47.022 [2024-07-15 18:38:18.015121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:23:47.022 [2024-07-15 18:38:18.015128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.022 [2024-07-15 18:38:18.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.022 [2024-07-15 18:38:18.015142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.022 [2024-07-15 18:38:18.015149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.022 [2024-07-15 18:38:18.015156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 
18:38:18.015279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.023 [2024-07-15 18:38:18.015426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.023 [2024-07-15 18:38:18.015432] nvme_qpair.c: 
00:23:47.023 [2024-07-15 18:38:18.015450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:47.023 [2024-07-15 18:38:18.015457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:47.023 [2024-07-15 18:38:18.015463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102120 len:8 PRP1 0x0 PRP2 0x0
00:23:47.023 [2024-07-15 18:38:18.015471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.023 [2024-07-15 18:38:18.015513] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10be300 was disconnected and freed. reset controller.
00:23:47.023 [2024-07-15 18:38:18.015522] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 4 admin ASYNC EVENT REQUEST commands (qid:0, cid:0-3) elided, each completed with ABORTED - SQ DELETION (00/08) ...]
00:23:47.023 [2024-07-15 18:38:18.015597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.023 [2024-07-15 18:38:18.018390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.023 [2024-07-15 18:38:18.018418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a0540 (9): Bad file descriptor
00:23:47.023 [2024-07-15 18:38:18.046727] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repeated records elided: queued READ commands (sqid:1, lba 37712-38272) and WRITE commands (sqid:1, lba 38288-38728), each *NOTICE* command print followed by an ABORTED - SQ DELETION (00/08) completion ...]
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.026 [2024-07-15 18:38:21.496108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.026 [2024-07-15 18:38:21.496134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.026 [2024-07-15 18:38:21.496141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38280 len:8 PRP1 0x0 PRP2 0x0 00:23:47.026 [2024-07-15 18:38:21.496147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.026 [2024-07-15 18:38:21.496189] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x126b170 was disconnected and freed. reset controller. 
00:23:47.026 [2024-07-15 18:38:21.496198] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:47.026 [... the four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3 down to cid:0) are each completed with ABORTED - SQ DELETION (00/08) ...]
00:23:47.026 [2024-07-15 18:38:21.496269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.026 [2024-07-15 18:38:21.499028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.026 [2024-07-15 18:38:21.499061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a0540 (9): Bad file descriptor
00:23:47.026 [2024-07-15 18:38:21.576538] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
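The "(00/08)" in each completion line is SPDK's (SCT/SC) status pair in hex: status code type 0x0 (generic command status) with status code 0x08, which the NVMe base spec names "Command Aborted due to SQ Deletion" — the expected status for every command in flight when the submission queue is torn down during a failover. A minimal sketch for tallying these pairs from a captured log (the decode_status helper and the bdevperf.log path are illustrative, not part of the test):

    # Sketch only: decode the (SCT/SC) pair printed by spdk_nvme_print_completion.
    decode_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            00/00) echo "SUCCESS" ;;
            00/08) echo "ABORTED - SQ DELETION" ;;   # generic status, code 0x08
            *)     echo "SCT=0x$sct SC=0x$sc (see the NVMe base spec status tables)" ;;
        esac
    }

    # Tally every status pair seen in the log; bdevperf.log is an assumed capture file.
    grep -o '([0-9a-f]\{2\}/[0-9a-f]\{2\})' bdevperf.log | sort | uniq -c |
    while read -r n pair; do
        sct=${pair:1:2}; sc=${pair:4:2}
        printf '%6d x %s -> %s\n' "$n" "$pair" "$(decode_status "$sct" "$sc")"
    done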
00:23:47.026 [2024-07-15 18:38:25.890433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.026 [2024-07-15 18:38:25.890468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.027 [... the remaining in-flight commands — READs lba:74592 through lba:74776 (SGL TRANSPORT DATA BLOCK) and WRITEs lba:74784 through lba:75344 (SGL DATA BLOCK OFFSET), len:8 each — are printed and completed with ABORTED - SQ DELETION (00/08) in the same pattern ...]
00:23:47.029 [2024-07-15 18:38:25.891854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:47.029 [... 32 queued WRITE commands (cid:0, lba:75352 through lba:75600, len:8, PRP1 0x0 PRP2 0x0) are aborted ("aborting queued i/o") and completed manually, each with ABORTED - SQ DELETION (00/08) ...]
00:23:47.030 [2024-07-15 18:38:25.903293] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x109bb80 was disconnected and freed. reset controller.
00:23:47.030 [2024-07-15 18:38:25.903301] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:47.030 [2024-07-15 18:38:25.903336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.030 [2024-07-15 18:38:25.903350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.030 [2024-07-15 18:38:25.903360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.030 [2024-07-15 18:38:25.903368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.030 [2024-07-15 18:38:25.903378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.030 [2024-07-15 18:38:25.903386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.030 [2024-07-15 18:38:25.903395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.030 [2024-07-15 18:38:25.903403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.030 [2024-07-15 18:38:25.903411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.030 [2024-07-15 18:38:25.903446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a0540 (9): Bad file descriptor 00:23:47.030 [2024-07-15 18:38:25.907155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.030 [2024-07-15 18:38:25.937630] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
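Editor's note: the failover/reset recorded above ("Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 ... Resetting controller successful.") is driven entirely over bdevperf's RPC socket. A condensed sketch of the multipath setup, reusing flags that appear verbatim in this log; $SPDK standing in for the jenkins workspace checkout is an assumption:

SPDK=${SPDK:-./spdk}                      # assumption: path to an SPDK checkout
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# attach the same subsystem over three ports; the extra trids become failover paths
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# dropping the active path while I/O is in flight produces the abort/failover/reset seen above
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1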
00:23:47.030 00:23:47.030 Latency(us) 00:23:47.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.030 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.030 Verification LBA range: start 0x0 length 0x4000 00:23:47.030 NVMe0n1 : 15.01 11482.77 44.85 409.21 0.00 10742.09 407.65 18849.40 00:23:47.030 =================================================================================================================== 00:23:47.030 Total : 11482.77 44.85 409.21 0.00 10742.09 407.65 18849.40 00:23:47.030 Received shutdown signal, test time was about 15.000000 seconds 00:23:47.030 00:23:47.030 Latency(us) 00:23:47.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.030 =================================================================================================================== 00:23:47.030 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4014724 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4014724 /var/tmp/bdevperf.sock 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4014724 ']' 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
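Editor's note: the check that follows (grep -c 'Resetting controller successful', count=3) is the pass criterion for the phase above — one successful reset per exercised path — after which bdevperf is relaunched idle with -z so the next phase can drive it over /var/tmp/bdevperf.sock. A sketch of that pattern; using try.txt as the captured log name is an assumption based on the cat of test/nvmf/host/try.txt later in this log:

count=$(grep -c 'Resetting controller successful' try.txt)   # try.txt: assumed log capture
(( count == 3 )) || exit 1
# relaunch bdevperf waiting for RPC (-z), same workload shape as the trace shows
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!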
00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.030 18:38:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:47.597 18:38:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.597 18:38:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:47.597 18:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.854 [2024-07-15 18:38:33.212072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.854 18:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:47.854 [2024-07-15 18:38:33.392501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:48.111 18:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.111 NVMe0n1 00:23:48.369 18:38:33 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.628 00:23:48.628 18:38:34 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.886 00:23:48.886 18:38:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:48.886 18:38:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:49.145 18:38:34 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.145 18:38:34 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:52.454 18:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.454 18:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:52.454 18:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.454 18:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4015650 00:23:52.454 18:38:37 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4015650 00:23:53.454 0 00:23:53.455 18:38:38 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:53.455 [2024-07-15 18:38:32.261709] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:23:53.455 [2024-07-15 18:38:32.261758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014724 ] 00:23:53.455 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.455 [2024-07-15 18:38:32.329069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.455 [2024-07-15 18:38:32.396884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.455 [2024-07-15 18:38:34.649508] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:53.455 [2024-07-15 18:38:34.649552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.455 [2024-07-15 18:38:34.649562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.455 [2024-07-15 18:38:34.649571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.455 [2024-07-15 18:38:34.649577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.455 [2024-07-15 18:38:34.649585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.455 [2024-07-15 18:38:34.649591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.455 [2024-07-15 18:38:34.649598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.455 [2024-07-15 18:38:34.649605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.455 [2024-07-15 18:38:34.649612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:53.455 [2024-07-15 18:38:34.649638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:53.455 [2024-07-15 18:38:34.649651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bf540 (9): Bad file descriptor 00:23:53.455 [2024-07-15 18:38:34.656284] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:53.455 Running I/O for 1 seconds... 
00:23:53.455 00:23:53.455 Latency(us) 00:23:53.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.455 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:53.455 Verification LBA range: start 0x0 length 0x4000 00:23:53.455 NVMe0n1 : 1.01 11627.72 45.42 0.00 0.00 10966.54 2340.57 8925.38 00:23:53.455 =================================================================================================================== 00:23:53.455 Total : 11627.72 45.42 0.00 0.00 10966.54 2340.57 8925.38 00:23:53.455 18:38:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.455 18:38:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:53.714 18:38:39 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.972 18:38:39 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.972 18:38:39 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:53.972 18:38:39 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.231 18:38:39 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4014724 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4014724 ']' 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4014724 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4014724 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4014724' 00:23:57.517 killing process with pid 4014724 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4014724 00:23:57.517 18:38:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4014724 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:57.775 
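Editor's note: the one-second verify stats above come from kicking the idle bdevperf instance with its helper script, after which the two remaining paths are detached and the process is killed. A condensed sketch of that drive-and-teardown sequence, with the same sockets and names as the log (RPC and bdevperf_pid as in the earlier sketches; the log backgrounds perform_tests and waits on it, error handling omitted here):

"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
# drop the remaining paths, then inspect what is still registered
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$RPC bdev_nvme_get_controllers | grep -q NVMe0
kill "$bdevperf_pid" && wait "$bdevperf_pid"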
18:38:43 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.775 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.775 rmmod nvme_tcp 00:23:57.775 rmmod nvme_fabrics 00:23:57.775 rmmod nvme_keyring 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4011703 ']' 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4011703 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4011703 ']' 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4011703 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4011703 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4011703' 00:23:58.034 killing process with pid 4011703 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4011703 00:23:58.034 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4011703 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.293 18:38:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.196 18:38:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.196 00:24:00.196 real 0m38.232s 00:24:00.196 user 2m2.140s 00:24:00.196 sys 0m7.602s 00:24:00.196 18:38:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.196 18:38:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
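Editor's note: everything after the reset count is standard nvmftestfini teardown, visible above as the rmmod output and the kill of the long-running target (pid 4011703). A teardown sketch mirroring those records (run as root; the netns delete is an assumption about what _remove_spdk_ns does internally):

modprobe -v -r nvme-tcp        # unloads nvme_tcp and nvme_fabrics, matching the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid: target pid recorded at startup
ip netns delete cvl_0_0_ns_spdk      # assumption: _remove_spdk_ns internals
ip -4 addr flush cvl_0_1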
00:24:00.196 ************************************ 00:24:00.196 END TEST nvmf_failover 00:24:00.196 ************************************ 00:24:00.196 18:38:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:00.196 18:38:45 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:00.196 18:38:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:00.196 18:38:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.196 18:38:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.454 ************************************ 00:24:00.454 START TEST nvmf_host_discovery 00:24:00.454 ************************************ 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:00.454 * Looking for test storage... 00:24:00.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:00.454 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:00.455 18:38:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.455 18:38:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.022 18:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:07.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.022 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:07.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.023 18:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:07.023 Found net devices under 0000:86:00.0: cvl_0_0 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:07.023 Found net devices under 0000:86:00.1: cvl_0_1 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.023 18:38:51 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:24:07.023 00:24:07.023 --- 10.0.0.2 ping statistics --- 00:24:07.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.023 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:24:07.023 00:24:07.023 --- 10.0.0.1 ping statistics --- 00:24:07.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.023 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4020085 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
4020085 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4020085 ']' 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.023 18:38:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 [2024-07-15 18:38:51.672003] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:24:07.024 [2024-07-15 18:38:51.672048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.024 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.024 [2024-07-15 18:38:51.744798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.024 [2024-07-15 18:38:51.821397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.024 [2024-07-15 18:38:51.821431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.024 [2024-07-15 18:38:51.821437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.024 [2024-07-15 18:38:51.821443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.024 [2024-07-15 18:38:51.821448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
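Editor's note: before this target came up, nvmf_tcp_init wired the two ice ports back-to-back by moving one into a private namespace; the ip/iptables/ping records above show each step succeeding (0% loss in both directions). A condensed sketch of that plumbing, using the interface names from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target reachability check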
00:24:07.024 [2024-07-15 18:38:51.821482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 [2024-07-15 18:38:52.519505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 [2024-07-15 18:38:52.531646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 null0 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 null1 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4020126 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 4020126 /tmp/host.sock 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4020126 ']' 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:07.024 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.024 18:38:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.283 [2024-07-15 18:38:52.608426] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:24:07.283 [2024-07-15 18:38:52.608467] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020126 ] 00:24:07.283 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.283 [2024-07-15 18:38:52.660007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.283 [2024-07-15 18:38:52.737507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.216 [2024-07-15 18:38:53.766893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.216 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:08.475 18:38:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:09.042 [2024-07-15 18:38:54.438513] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:09.042 [2024-07-15 18:38:54.438533] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:09.042 [2024-07-15 18:38:54.438544] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:09.042 [2024-07-15 18:38:54.566943] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:09.299 [2024-07-15 18:38:54.670917] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:09.299 [2024-07-15 18:38:54.670937] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.557 18:38:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.557 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:09.558 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.816 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.817 [2024-07-15 18:38:55.291070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.817 [2024-07-15 18:38:55.291474] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:09.817 [2024-07-15 18:38:55.291497] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.817 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.075 [2024-07-15 18:38:55.418867] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:10.075 18:38:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:10.075 [2024-07-15 18:38:55.484370] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:10.075 [2024-07-15 18:38:55.484388] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:10.075 [2024-07-15 18:38:55.484393] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.010 [2024-07-15 18:38:56.550646] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:11.010 [2024-07-15 18:38:56.550668] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.010 [2024-07-15 18:38:56.553328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.010 [2024-07-15 18:38:56.553351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.010 [2024-07-15 18:38:56.553360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.010 [2024-07-15 18:38:56.553368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.010 [2024-07-15 18:38:56.553375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.010 [2024-07-15 18:38:56.553382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.010 [2024-07-15 18:38:56.553389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.010 [2024-07-15 18:38:56.553395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.010 [2024-07-15 18:38:56.553406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.010 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.010 [2024-07-15 18:38:56.563345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.269 [2024-07-15 18:38:56.573383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.269 [2024-07-15 18:38:56.573647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.269 [2024-07-15 18:38:56.573660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.269 [2024-07-15 18:38:56.573668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.269 [2024-07-15 18:38:56.573678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.269 [2024-07-15 18:38:56.573688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.269 [2024-07-15 18:38:56.573694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.269 [2024-07-15 18:38:56.573702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.269 [2024-07-15 18:38:56.573713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.269 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.269 [2024-07-15 18:38:56.583495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.269 [2024-07-15 18:38:56.583759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.269 [2024-07-15 18:38:56.583772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.269 [2024-07-15 18:38:56.583779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.269 [2024-07-15 18:38:56.583790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.269 [2024-07-15 18:38:56.583807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.269 [2024-07-15 18:38:56.583814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.269 [2024-07-15 18:38:56.583820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:11.269 [2024-07-15 18:38:56.583829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.269 [2024-07-15 18:38:56.593545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.269 [2024-07-15 18:38:56.593728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.269 [2024-07-15 18:38:56.593739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.269 [2024-07-15 18:38:56.593745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.269 [2024-07-15 18:38:56.593755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.269 [2024-07-15 18:38:56.593764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.269 [2024-07-15 18:38:56.593770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.269 [2024-07-15 18:38:56.593776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.269 [2024-07-15 18:38:56.593785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.269 [2024-07-15 18:38:56.603596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.269 [2024-07-15 18:38:56.603793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.269 [2024-07-15 18:38:56.603805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.269 [2024-07-15 18:38:56.603812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.269 [2024-07-15 18:38:56.603822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.269 [2024-07-15 18:38:56.603830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.269 [2024-07-15 18:38:56.603836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.269 [2024-07-15 18:38:56.603842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.269 [2024-07-15 18:38:56.603851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
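The interleaved autotest_common.sh@912-918 frames (local cond, local max=10, (( max-- )), eval, sleep 1) all belong to one helper, waitforcondition, which polls a shell condition once a second for up to ten tries. A reconstruction from this trace follows; it is a sketch, and the real helper in test/common/autotest_common.sh may differ in detail:

# Poll a shell condition up to 10 times, one second apart;
# a non-zero exit status means the condition never became true.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Invocation pattern seen throughout this test:
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'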
00:24:11.269 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.269 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.270 [2024-07-15 18:38:56.613646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.270 [2024-07-15 18:38:56.613861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.270 [2024-07-15 18:38:56.613877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.270 [2024-07-15 18:38:56.613885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.270 [2024-07-15 18:38:56.613895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.270 [2024-07-15 18:38:56.613912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.270 [2024-07-15 18:38:56.613918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.270 [2024-07-15 18:38:56.613924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.270 [2024-07-15 18:38:56.613933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
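For reference, the discovery.sh@55 and @59 frames that keep recurring are two small accessors. Reconstructed from the trace (a sketch; the script may phrase them differently), both query the host application over its private RPC socket and flatten the names onto one sorted line so the [[ ... == ... ]] comparisons above can match them:

# Names of the bdevs the host currently sees (e.g. "nvme0n1 nvme0n2").
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Names of the attached NVMe controllers (e.g. "nvme0").
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}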
00:24:11.270 [2024-07-15 18:38:56.623696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.270 [2024-07-15 18:38:56.623914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.270 [2024-07-15 18:38:56.623926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.270 [2024-07-15 18:38:56.623933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.270 [2024-07-15 18:38:56.623943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.270 [2024-07-15 18:38:56.623952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.270 [2024-07-15 18:38:56.623958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.270 [2024-07-15 18:38:56.623964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.270 [2024-07-15 18:38:56.623972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.270 [2024-07-15 18:38:56.633746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.270 [2024-07-15 18:38:56.633982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.270 [2024-07-15 18:38:56.633993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caf10 with addr=10.0.0.2, port=4420 00:24:11.270 [2024-07-15 18:38:56.634000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22caf10 is same with the state(5) to be set 00:24:11.270 [2024-07-15 18:38:56.634009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22caf10 (9): Bad file descriptor 00:24:11.270 [2024-07-15 18:38:56.634018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:11.270 [2024-07-15 18:38:56.634024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:11.270 [2024-07-15 18:38:56.634030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:11.270 [2024-07-15 18:38:56.634038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
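The next two bdev_nvme.c entries are the resolution of the storm: the refreshed discovery log page no longer lists 10.0.0.2:4420, so that path is dropped ("not found") while 4421 survives ("found again"). The test then confirms this with the same query its get_subsystem_paths helper runs (per the discovery.sh@63 frames), which at this point should print only the second port:

rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected output: 4421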
00:24:11.270 [2024-07-15 18:38:56.637410] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:11.270 [2024-07-15 18:38:56.637425] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.270 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:11.529 
18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.529 18:38:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.464 [2024-07-15 18:38:57.977892] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:12.464 [2024-07-15 18:38:57.977910] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:12.464 [2024-07-15 18:38:57.977921] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.723 [2024-07-15 18:38:58.107319] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:12.723 [2024-07-15 18:38:58.172715] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:12.723 [2024-07-15 18:38:58.172744] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.723 request: 00:24:12.723 { 00:24:12.723 "name": "nvme", 00:24:12.723 "trtype": "tcp", 00:24:12.723 "traddr": "10.0.0.2", 00:24:12.723 "adrfam": "ipv4", 00:24:12.723 "trsvcid": "8009", 00:24:12.723 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:12.723 "wait_for_attach": true, 00:24:12.723 "method": "bdev_nvme_start_discovery", 00:24:12.723 "req_id": 1 00:24:12.723 } 00:24:12.723 Got JSON-RPC error response 00:24:12.723 response: 00:24:12.723 { 00:24:12.723 "code": -17, 00:24:12.723 "message": "File exists" 00:24:12.723 } 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:12.723 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.724 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.982 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.983 request: 00:24:12.983 { 00:24:12.983 "name": "nvme_second", 00:24:12.983 "trtype": "tcp", 00:24:12.983 "traddr": "10.0.0.2", 00:24:12.983 "adrfam": "ipv4", 00:24:12.983 "trsvcid": "8009", 00:24:12.983 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:12.983 "wait_for_attach": true, 00:24:12.983 "method": "bdev_nvme_start_discovery", 00:24:12.983 "req_id": 1 00:24:12.983 } 00:24:12.983 Got JSON-RPC error response 00:24:12.983 response: 00:24:12.983 { 00:24:12.983 "code": -17, 00:24:12.983 "message": "File exists" 00:24:12.983 } 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.983 18:38:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.983 18:38:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.918 [2024-07-15 18:38:59.412803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:13.918 [2024-07-15 18:38:59.412831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e4880 with addr=10.0.0.2, port=8010 00:24:13.918 [2024-07-15 18:38:59.412845] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:13.918 [2024-07-15 18:38:59.412852] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:13.918 [2024-07-15 18:38:59.412857] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:15.295 [2024-07-15 18:39:00.415208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.295 [2024-07-15 18:39:00.415233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2307a00 with addr=10.0.0.2, port=8010 00:24:15.295 [2024-07-15 18:39:00.415247] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:15.295 [2024-07-15 18:39:00.415254] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:15.295 [2024-07-15 18:39:00.415260] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:15.861 [2024-07-15 18:39:01.417427] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:16.120 request: 00:24:16.120 { 00:24:16.120 "name": "nvme_second", 00:24:16.120 "trtype": "tcp", 00:24:16.120 "traddr": "10.0.0.2", 00:24:16.120 "adrfam": "ipv4", 00:24:16.120 "trsvcid": "8010", 00:24:16.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:16.120 "wait_for_attach": false, 00:24:16.120 "attach_timeout_ms": 3000, 00:24:16.120 "method": "bdev_nvme_start_discovery", 00:24:16.120 "req_id": 1 00:24:16.120 } 00:24:16.120 Got JSON-RPC error response 00:24:16.120 response: 00:24:16.120 { 00:24:16.120 "code": -110, 
00:24:16.120 "message": "Connection timed out" 00:24:16.120 } 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4020126 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.120 rmmod nvme_tcp 00:24:16.120 rmmod nvme_fabrics 00:24:16.120 rmmod nvme_keyring 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4020085 ']' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4020085 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 4020085 ']' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 4020085 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020085 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020085' 00:24:16.120 killing process with pid 4020085 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 4020085 00:24:16.120 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 4020085 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.379 18:39:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.282 18:39:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.282 00:24:18.282 real 0m18.055s 00:24:18.282 user 0m22.344s 00:24:18.282 sys 0m5.710s 00:24:18.282 18:39:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.282 18:39:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.282 ************************************ 00:24:18.282 END TEST nvmf_host_discovery 00:24:18.282 ************************************ 00:24:18.542 18:39:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:18.542 18:39:03 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:18.542 18:39:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:18.542 18:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.542 18:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:18.542 ************************************ 00:24:18.542 START TEST nvmf_host_multipath_status 00:24:18.542 ************************************ 00:24:18.542 18:39:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:18.542 * Looking for test storage... 
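(For readers replaying the discovery test above by hand: both negative checks can be reproduced with the same RPCs the test drives. A minimal sketch, assuming an SPDK host application is already serving RPCs on /tmp/host.sock and a discovery service is listening at 10.0.0.2:8009, exactly as the trace sets up; the echo messages are illustrative, not part of the test.)

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# First discovery attach succeeds and registers discovery controller "nvme".
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Re-registering the same discovery trid (even under a new bdev name) is
# rejected with JSON-RPC error -17 "File exists", as seen twice in the trace.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo "duplicate discovery trid rejected as expected"

# Against a port with no discovery listener (8010 in the trace), a bounded
# attach fails with -110 "Connection timed out" once the 3000 ms -T budget
# is spent retrying connect() (the errno 111 posix_sock_create errors above).
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo "discovery attach timed out as expected"
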
00:24:18.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.542 18:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.542 18:39:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:18.542 18:39:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.542 18:39:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.542 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.543 18:39:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.543 18:39:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:25.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:25.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:25.107 Found net devices under 0000:86:00.0: cvl_0_0 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:25.107 Found net devices under 0000:86:00.1: cvl_0_1 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:25.107 18:39:09 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:25.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:24:25.107 00:24:25.107 --- 10.0.0.2 ping statistics --- 00:24:25.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.107 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:24:25.107 00:24:25.107 --- 10.0.0.1 ping statistics --- 00:24:25.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.107 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4025670 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4025670 00:24:25.107 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4025670 ']' 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.108 18:39:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.108 [2024-07-15 18:39:09.789216] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:24:25.108 [2024-07-15 18:39:09.789260] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.108 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.108 [2024-07-15 18:39:09.860837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:25.108 [2024-07-15 18:39:09.939593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.108 [2024-07-15 18:39:09.939642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.108 [2024-07-15 18:39:09.939649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.108 [2024-07-15 18:39:09.939655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.108 [2024-07-15 18:39:09.939660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.108 [2024-07-15 18:39:09.939729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.108 [2024-07-15 18:39:09.939729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4025670 00:24:25.108 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:25.366 [2024-07-15 18:39:10.783917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.366 18:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:25.624 Malloc0 00:24:25.624 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:25.624 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.881 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.139 [2024-07-15 18:39:11.509513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.139 18:39:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:26.139 [2024-07-15 18:39:11.689960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4026189 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4026189 /var/tmp/bdevperf.sock 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4026189 ']' 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.398 18:39:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.332 18:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.332 18:39:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:27.332 18:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:27.332 18:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:27.921 Nvme0n1 00:24:27.921 18:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:28.178 Nvme0n1 00:24:28.178 18:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:28.178 18:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:30.081 18:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:30.081 18:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:30.339 18:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.597 18:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:31.532 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:31.532 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:31.532 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.532 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.790 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.790 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:31.790 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.790 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.048 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.307 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.307 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.307 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.307 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.566 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.566 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.566 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.566 18:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.825 18:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.825 18:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:32.825 18:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:32.825 18:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:33.084 18:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:34.021 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:34.021 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:34.021 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.021 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.280 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.280 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.280 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.280 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.538 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.538 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.538 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.538 18:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.538 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.538 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.538 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.538 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.796 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.796 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.796 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.796 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.055 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.055 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.055 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.055 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.313 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.313 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:35.313 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.313 18:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:35.571 18:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:36.507 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:36.507 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.507 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.507 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.766 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.766 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:36.766 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.766 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.024 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.024 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.025 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.025 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.283 18:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.543 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.543 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.543 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.543 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.803 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.803 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:37.803 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:38.061 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:38.061 18:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.437 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.438 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.438 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.438 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.438 18:39:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.696 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.696 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.696 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.696 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.954 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.954 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.954 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.954 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:40.213 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:40.472 18:39:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.731 18:39:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:41.665 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:41.665 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:41.665 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.665 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.923 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.181 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.440 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.440 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:42.440 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.440 18:39:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:42.698 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:42.956 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:43.214 18:39:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:44.149 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:44.149 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.149 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.149 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.407 18:39:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.665 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.665 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.665 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.665 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.923 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.923 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:44.923 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.923 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.182 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:45.441 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:45.441 18:39:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:45.700 18:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:45.959 18:39:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:46.894 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:46.894 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:46.894 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.894 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.153 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.411 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.411 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.411 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.411 18:39:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.670 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.670 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.670 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.670 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.927 18:39:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:47.927 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:48.185 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:48.444 18:39:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:49.380 18:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:49.380 18:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:49.380 18:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.380 18:39:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.638 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.638 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:49.638 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.638 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.896 18:39:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.896 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.153 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.153 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.153 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.153 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.419 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.419 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.419 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.419 18:39:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.679 18:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.679 18:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:50.679 18:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.679 18:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:50.936 18:39:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:51.870 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:51.870 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:51.870 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.870 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.128 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.128 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.128 18:39:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.128 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.386 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.386 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.386 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.386 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.644 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.644 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.644 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.644 18:39:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.644 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.644 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.644 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.644 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.942 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.942 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.942 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.942 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.223 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.223 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:53.223 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.223 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:53.482 18:39:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:54.417 18:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:54.417 18:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.417 18:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.417 18:39:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.675 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.675 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.675 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.675 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.933 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.933 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.933 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.933 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.192 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.450 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.450 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:55.450 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.450 18:39:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4026189 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4026189 ']' 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4026189 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026189 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026189' 00:24:55.709 killing process with pid 4026189 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4026189 00:24:55.709 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4026189 00:24:55.709 Connection closed with partial response: 00:24:55.710 00:24:55.710 00:24:55.983 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4026189 00:24:55.983 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.983 [2024-07-15 18:39:11.763436] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:24:55.983 [2024-07-15 18:39:11.763487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026189 ] 00:24:55.983 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.983 [2024-07-15 18:39:11.828209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.983 [2024-07-15 18:39:11.900929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.983 Running I/O for 90 seconds... 
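[Editor's note] The xtrace above exercises three small helpers from test/nvmf/host/multipath_status.sh. Below is a condensed sketch reconstructed from the traced commands and the multipath_status.sh@59-60/@64/@68-73 source markers; the exact local names and quoting in the real script are inferred from the trace, not confirmed against the source:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # @59-60: set the ANA state of the 4420 and 4421 listeners on the target side
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # @64: query bdevperf over its RPC socket for its I/O paths and compare one
    # attribute of one path; $1 = trsvcid, $2 = current|connected|accessible,
    # $3 = expected value ("true"/"false")
    port_status() {
        [[ "$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")" == "$3" ]]
    }

    # @68-73: one assertion per (port, attribute) pair, six booleans in
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Each iteration in the trace is the same pattern: set_ANA_state, sleep 1 so the host side can observe the ANA change, then check_status against the expected matrix. At multipath_status.sh@116 the test switches the bdev from the default policy to active_active via bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active and reruns the matrix with both paths eligible to be "current" at once.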
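[Editor's note] For reading the try.txt dump that follows: each pair of nvme_qpair.c lines is one 8-block (4 KiB) WRITE that bdevperf queued on qpair sqid:1, followed by the completion it received. The "(03/02)" printed with ASYMMETRIC ACCESS INACCESSIBLE is Status Code Type 3h (Path Related Status), Status Code 02h (Asymmetric Access Inaccessible): the target refuses I/O on the listener whose ANA state the test just set to inaccessible, which is what forces bdevperf onto the surviving path. A quick way to gauge how much I/O hit the inaccessible path in a run like this (a hypothetical one-liner, not part of the test):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt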
00:24:55.983 [2024-07-15 18:39:25.914189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.914437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.914446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915293] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 
18:39:25.915498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:55.983 [2024-07-15 18:39:25.915511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-07-15 18:39:25.915518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93504 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.915982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-07-15 18:39:25.916163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:55.984 [2024-07-15 18:39:25.916177] 
00:24:55.984 [2024-07-15 18:39:25.916-25.917] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE (sqid:1, lba:93664-94144, len:8, SGL DATA BLOCK) and READ (sqid:1, lba:93136-93184, len:8, SGL TRANSPORT DATA BLOCK) commands, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); identical notice pairs elided
00:24:55.986 [2024-07-15 18:39:38.856-38.861] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE (sqid:1, lba:63464-64072, len:8, SGL DATA BLOCK) and READ (sqid:1, lba:63072-64080, len:8, SGL TRANSPORT DATA BLOCK) commands, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); identical notice pairs elided
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.861213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.861220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.863865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.863890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.863909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.863928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.863948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.863967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.863986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.863998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:55.990 [2024-07-15 18:39:38.864023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.864216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.864505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.990 [2024-07-15 18:39:38.864512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:24:55.990 [2024-07-15 18:39:38.866115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.990 [2024-07-15 18:39:38.866236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:55.990 [2024-07-15 18:39:38.866249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-07-15 18:39:38.866857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.991 [2024-07-15 18:39:38.866910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.991 [2024-07-15 18:39:38.866916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.866929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:55.992 [2024-07-15 18:39:38.866935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.866947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.866953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.866968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.866975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.866987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.866995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.867015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.867033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.867052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.867178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.867185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:24:55.992 [2024-07-15 18:39:38.868570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-07-15 18:39:38.868654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.868687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.868693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:55.992 [2024-07-15 18:39:38.869980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-07-15 18:39:38.869998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-07-15 18:39:38.870118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.870312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.870324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.878793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.878812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.878821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.878839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.878848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.878864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.878873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.878888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.993 [2024-07-15 18:39:38.878897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:55.993 [2024-07-15 18:39:38.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:55.993 [2024-07-15 18:39:38.878923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:55.993 [2024-07-15 18:39:38.878939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:55.993 [2024-07-15 18:39:38.878947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:55.993 [2024-07-15 18:39:38.878964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:55.993 [2024-07-15 18:39:38.878972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: READ and WRITE commands on sqid:1 (varying cid, lba 63328-65584, len:8), every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-07-15 18:39:38.878 through 18:39:38.894 ...]
00:24:55.998 [2024-07-15 18:39:38.894153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:55.998 [2024-07-15 18:39:38.894162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:55.998 [2024-07-15 18:39:38.894177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:55.998 [2024-07-15 18:39:38.894897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.894969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.894984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.894992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.895008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.895018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.895034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.895043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.897090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.897118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.897137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.897156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.998 [2024-07-15 18:39:38.897177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:55.998 [2024-07-15 18:39:38.897247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.998 [2024-07-15 18:39:38.897255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:24:55.999 [2024-07-15 18:39:38.897516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.897605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.897695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.897702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.898306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.898407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.898419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.899766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.899783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.899798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.899806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.899820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.999 [2024-07-15 18:39:38.899828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.899842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.999 [2024-07-15 18:39:38.899849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:55.999 [2024-07-15 18:39:38.899862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:56.000 [2024-07-15 18:39:38.899869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.899891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.899910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.899929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.899949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.899969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.899982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.899989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.900420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:56.000 [2024-07-15 18:39:38.900964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.900985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.900992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.901015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.901034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-07-15 18:39:38.901054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.901074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.901094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.901114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.901134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:56.000 [2024-07-15 18:39:38.901147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.000 [2024-07-15 18:39:38.901154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:56.000 [2024-07-15 18:39:38.901167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.000 [2024-07-15 18:39:38.901175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:56.000 [2024-07-15 18:39:38.901871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:56.000 [2024-07-15 18:39:38.901880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[... several hundred further READ/WRITE commands on qid:1 (lba ~64192-66960), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
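The "(03/02)" pair printed with each failed completion is the NVMe status code type and status code: SCT 3h is Path Related Status and, within it, SC 02h is Asymmetric Access Inaccessible, meaning the controller reports the namespace as unreachable through this path's ANA group while the multipath test fails a path over. A minimal decoding sketch in Python (an illustrative aside, not part of the test output; the table values come from the NVMe base specification's Path Related Status table, not from the SPDK sources):

# Decode the (SCT/SC) pair that spdk_nvme_print_completion prints, e.g. "(03/02)".
# Names follow the NVMe base specification's Path Related Status values (SCT 3h).
PATH_RELATED_STATUS = {
    0x00: "Internal Path Error",
    0x01: "Asymmetric Access Persistent Loss",
    0x02: "Asymmetric Access Inaccessible",
    0x03: "Asymmetric Access Transition",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for a completion status; only SCT 3h is mapped here."""
    if sct == 0x3:
        return PATH_RELATED_STATUS.get(sc, f"unknown path-related status {sc:#04x}")
    return f"SCT {sct:#x} / SC {sc:#04x}"

print(decode_status(0x3, 0x2))  # -> Asymmetric Access Inaccessible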
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-07-15 18:39:38.913130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-07-15 18:39:38.913153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-07-15 18:39:38.913174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-07-15 18:39:38.913193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-07-15 18:39:38.913213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:56.004 [2024-07-15 18:39:38.913226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.004 [2024-07-15 18:39:38.913233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:56.005 [2024-07-15 18:39:38.913246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.005 [2024-07-15 18:39:38.913253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:56.005 Received shutdown signal, test time was about 27.377721 seconds 00:24:56.005 00:24:56.005 Latency(us) 00:24:56.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.005 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:56.005 Verification LBA range: start 0x0 length 0x4000 00:24:56.005 Nvme0n1 : 27.38 10819.02 42.26 0.00 0.00 11811.38 133.61 3019898.88 00:24:56.005 =================================================================================================================== 00:24:56.005 Total : 10819.02 42.26 0.00 0.00 11811.38 133.61 3019898.88 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.005 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.005 rmmod nvme_tcp 00:24:56.005 rmmod nvme_fabrics 00:24:56.005 rmmod nvme_keyring 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4025670 ']' 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4025670 ']' 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4025670' 00:24:56.264 killing process with pid 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4025670 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
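The "killprocess 4025670" sequence traced above is the standard autotest teardown: confirm the pid still points at the expected SPDK reactor process, then kill it and reap it so the TCP ports and shared memory are free for the next test. A minimal sketch of that pattern, reconstructed from this xtrace rather than copied verbatim from common/autotest_common.sh:

# Sketch only; argument handling and corner cases in the real helper differ.
killprocess() {
    local pid=$1
    kill -0 "$pid"                                       # fails (and aborts under set -e) if the pid is already gone
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_0" for an SPDK app, as seen in the trace
    fi
    if [[ $process_name == sudo ]]; then
        kill -9 "$pid"                                   # a sudo wrapper only dies to SIGKILL (not the case here)
    else
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap, so nothing lingers into the next test
    fi
}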
00:24:56.264 18:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.798 18:39:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.798 00:24:58.798 real 0m39.951s 00:24:58.798 user 1m47.673s 00:24:58.798 sys 0m10.820s 00:24:58.798 18:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.798 18:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.798 ************************************ 00:24:58.798 END TEST nvmf_host_multipath_status 00:24:58.798 ************************************ 00:24:58.798 18:39:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:58.798 18:39:43 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:58.798 18:39:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.798 18:39:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.798 18:39:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.798 ************************************ 00:24:58.798 START TEST nvmf_discovery_remove_ifc 00:24:58.798 ************************************ 00:24:58.798 18:39:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:58.798 * Looking for test storage... 00:24:58.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.798 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect'
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go prefixes repeated by earlier sourcing, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated prefixes, elided ...]:/var/lib/snapd/snap/bin
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated prefixes, elided ...]:/var/lib/snapd/snap/bin
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [expanded PATH, same value as above, elided]
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.799 18:39:44
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.799 18:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:04.069 18:39:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:04.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:04.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.069 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:04.070 Found net devices under 0000:86:00.0: cvl_0_0 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:04.070 Found net devices under 0000:86:00.1: cvl_0_1 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:04.070 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:04.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:25:04.328 00:25:04.328 --- 10.0.0.2 ping statistics --- 00:25:04.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.328 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:04.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:04.328 00:25:04.328 --- 10.0.0.1 ping statistics --- 00:25:04.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.328 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4034508 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4034508 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4034508 ']' 00:25:04.328 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.329 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.329 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.329 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.329 18:39:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.329 [2024-07-15 18:39:49.777406] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
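The two ping blocks above close out the network fixture that nvmftestinit builds for phy e810 runs: the first port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; nvmf_tgt is then launched inside that namespace (the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2" entry above). The fixture commands, collected from this trace; namespace and interface names are specific to this CI host:

ip netns add cvl_0_0_ns_spdk                                         # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator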
00:25:04.329 [2024-07-15 18:39:49.777450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.329 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.329 [2024-07-15 18:39:49.848014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.587 [2024-07-15 18:39:49.926813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.587 [2024-07-15 18:39:49.926849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.587 [2024-07-15 18:39:49.926857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.587 [2024-07-15 18:39:49.926863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.587 [2024-07-15 18:39:49.926868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.587 [2024-07-15 18:39:49.926887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.153 [2024-07-15 18:39:50.626279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.153 [2024-07-15 18:39:50.634413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:05.153 null0 00:25:05.153 [2024-07-15 18:39:50.666413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4034747 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4034747 /tmp/host.sock 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4034747 ']' 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:05.153 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.153 18:39:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.411 [2024-07-15 18:39:50.733421] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:25:05.411 [2024-07-15 18:39:50.733462] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034747 ] 00:25:05.411 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.411 [2024-07-15 18:39:50.799418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.411 [2024-07-15 18:39:50.877535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.978 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.236 18:39:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.170 [2024-07-15 18:39:52.675518] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:07.170 [2024-07-15 18:39:52.675538] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:07.170 [2024-07-15 18:39:52.675551] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.428 [2024-07-15 18:39:52.761813] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:07.428 [2024-07-15 18:39:52.906771] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:07.428 [2024-07-15 18:39:52.906813] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:07.428 [2024-07-15 18:39:52.906833] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:07.428 [2024-07-15 18:39:52.906845] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.428 [2024-07-15 18:39:52.906863] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.428 [2024-07-15 18:39:52.913168] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x134de30 was disconnected and freed. delete nvme_qpair. 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:07.428 18:39:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.686 18:39:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.686 18:39:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:08.628 18:39:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.000 18:39:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:10.931 18:39:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.864 18:39:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.864 18:39:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.801 [2024-07-15 18:39:58.348268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:12.801 [2024-07-15 18:39:58.348305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.801 [2024-07-15 18:39:58.348315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.801 [2024-07-15 18:39:58.348343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.801 [2024-07-15 18:39:58.348350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.801 [2024-07-15 18:39:58.348357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.801 [2024-07-15 18:39:58.348363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.801 [2024-07-15 18:39:58.348370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.801 [2024-07-15 18:39:58.348376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.801 [2024-07-15 18:39:58.348384] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.801 [2024-07-15 18:39:58.348390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.801 [2024-07-15 18:39:58.348397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314690 is same with the state(5) to be set 00:25:12.801 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.059 [2024-07-15 18:39:58.358290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1314690 (9): Bad file descriptor 00:25:13.059 [2024-07-15 18:39:58.368328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.059 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.059 18:39:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.994 [2024-07-15 18:39:59.377453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:13.994 [2024-07-15 18:39:59.377531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1314690 with addr=10.0.0.2, port=4420 00:25:13.994 [2024-07-15 18:39:59.377562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314690 is same with the state(5) to be set 00:25:13.994 [2024-07-15 18:39:59.377614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1314690 (9): Bad file descriptor 00:25:13.994 [2024-07-15 18:39:59.377714] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:13.994 [2024-07-15 18:39:59.377753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.994 [2024-07-15 18:39:59.377774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.994 [2024-07-15 18:39:59.377795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.994 [2024-07-15 18:39:59.377833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
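The connect() failures above (errno 110, connection timed out) and the quick transition to a failed controller state are governed by the reconnect limits the test chose when it attached the discovery controller (host/discovery_remove_ifc.sh@69 earlier in this log). For reference, that attach call as issued over the host RPC socket, flags copied from the trace:

# rpc_cmd is the autotest wrapper around scripts/rpc.py, here pointed at /tmp/host.sock.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

With a 2-second controller-loss timeout and a 1-second reconnect delay, the host gives up on the dead path almost immediately, which is what lets the bdev disappear fast enough for the polling loops in this trace to observe it.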
00:25:13.994 [2024-07-15 18:39:59.377865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.994 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.995 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.995 18:39:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.954 [2024-07-15 18:40:00.380361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:14.954 [2024-07-15 18:40:00.380385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:14.954 [2024-07-15 18:40:00.380392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:14.954 [2024-07-15 18:40:00.380399] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:14.954 [2024-07-15 18:40:00.380412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
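The pattern that repeats throughout this trace, "rpc_cmd -s /tmp/host.sock bdev_get_bdevs" piped through jq, sort and xargs followed by "sleep 1", is the test polling the host's bdev list until it matches an expected value (here, waiting for nvme0n1 to vanish after the path went down). A reconstruction of those two helpers as inferred from the xtrace; the names come from the trace, the exact bodies are assumptions, not copies of host/discovery_remove_ifc.sh:

# Reconstructed sketch of the helpers exercised in the loops above.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1    # '' means "wait until no bdev is left"
    while [[ $(get_bdev_list) != "$expected" ]]; do
        sleep 1          # matches the "sleep 1" entries in the trace
    done
}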
00:25:14.954 [2024-07-15 18:40:00.380431] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:14.954 [2024-07-15 18:40:00.380452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.954 [2024-07-15 18:40:00.380461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.954 [2024-07-15 18:40:00.380471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.954 [2024-07-15 18:40:00.380477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.954 [2024-07-15 18:40:00.380484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.954 [2024-07-15 18:40:00.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.954 [2024-07-15 18:40:00.380497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.954 [2024-07-15 18:40:00.380503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.954 [2024-07-15 18:40:00.380510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.954 [2024-07-15 18:40:00.380533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.954 [2024-07-15 18:40:00.380539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
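At this point the discovery controller itself has entered the failed state, and the next step in the trace restores the interface so discovery can re-attach the subsystem as a fresh controller (nvme1). The four ip commands, copied from this log (host/discovery_remove_ifc.sh@75-76 earlier and @82-83 just below; namespace and interface names are specific to this CI host):

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # earlier: drop the target address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # earlier: kill the path, nvme0n1 goes away
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # now: restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # now: bring the path back, nvme1n1 appears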
00:25:14.954 [2024-07-15 18:40:00.380659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313a80 (9): Bad file descriptor 00:25:14.954 [2024-07-15 18:40:00.381669] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:14.954 [2024-07-15 18:40:00.381679] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.954 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.213 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:15.214 18:40:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.148 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:16.149 18:40:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.084 [2024-07-15 18:40:02.439862] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:17.084 [2024-07-15 18:40:02.439887] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:17.084 [2024-07-15 18:40:02.439903] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:17.084 [2024-07-15 18:40:02.566286] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.342 18:40:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.342 [2024-07-15 18:40:02.742843] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:17.342 [2024-07-15 18:40:02.742878] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:17.342 [2024-07-15 18:40:02.742895] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:17.342 [2024-07-15 18:40:02.742908] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:17.342 [2024-07-15 18:40:02.742915] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:17.342 [2024-07-15 18:40:02.748478] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x132a8d0 was disconnected and freed. delete nvme_qpair. 
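The repeating get_bdev_list blocks above and below are a one-second polling loop: wait_for_bdev re-queries the SPDK host app over its RPC socket until the expected bdev reappears after rediscovery. A minimal sketch of that pattern, reconstructed from the rpc_cmd | jq | sort | xargs pipeline in the trace (not the verbatim helper from discovery_remove_ifc.sh):

    # Sketch, assuming rpc_cmd wraps scripts/rpc.py as in autotest_common.sh.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local bdev=$1
        # The trace compares the joined list against the bdev name once per second.
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme1n1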
00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4034747 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4034747 ']' 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4034747 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034747 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034747' 00:25:18.277 killing process with pid 4034747 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4034747 00:25:18.277 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4034747 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.537 18:40:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.537 rmmod nvme_tcp 00:25:18.537 rmmod nvme_fabrics 00:25:18.537 rmmod nvme_keyring 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
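killprocess, traced above for the host-app pid 4034747, guards against PID reuse before killing: it verifies the process is still alive with kill -0, resolves the comm name (reactor_0 here) so it never signals an unrelated or sudo-wrapped process, then kills and waits. A condensed sketch of the Linux branch visible in the xtrace, not the verbatim helper:

    # Condensed from the autotest_common.sh xtrace above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 1    # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [[ $name != sudo ]] || return 1            # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }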
00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4034508 ']' 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4034508 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4034508 ']' 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4034508 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.537 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034508 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034508' 00:25:18.796 killing process with pid 4034508 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4034508 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4034508 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.796 18:40:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.333 18:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.333 00:25:21.333 real 0m22.421s 00:25:21.333 user 0m28.941s 00:25:21.333 sys 0m5.568s 00:25:21.333 18:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.333 18:40:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.333 ************************************ 00:25:21.333 END TEST nvmf_discovery_remove_ifc 00:25:21.333 ************************************ 00:25:21.333 18:40:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:21.333 18:40:06 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:21.333 18:40:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:21.333 18:40:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.333 18:40:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.333 ************************************ 00:25:21.333 START TEST nvmf_identify_kernel_target 00:25:21.333 ************************************ 
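The nvmf_identify_kernel_target run that starts here builds an in-kernel NVMe/TCP target through nvmet configfs and then points spdk_nvme_identify at it. The individual mkdir/echo/ln -s steps are logged further down; condensed here into one sketch for readability. The attribute file names are the standard nvmet configfs ones and are an assumption (the log records only the echoed values); the NQN, device, and address are the values from this run:

    # Condensed sketch of the configure_kernel_target steps logged below.
    modprobe nvmet nvme-tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # assumed target file
    echo 1 > "$sub/attr_allow_any_host"                          # assumed target file
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"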
00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:21.333 * Looking for test storage... 00:25:21.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:21.333 18:40:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.333 18:40:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:26.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:26.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:26.663 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:26.664 Found net devices under 0000:86:00.0: cvl_0_0 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:26.664 Found net devices under 0000:86:00.1: cvl_0_1 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.664 18:40:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.664 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:25:26.924 00:25:26.924 --- 10.0.0.2 ping statistics --- 00:25:26.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.924 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:25:26.924 00:25:26.924 --- 10.0.0.1 ping statistics --- 00:25:26.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.924 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.924 18:40:12 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:26.924 18:40:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:29.460 Waiting for block devices as requested 00:25:29.460 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:29.719 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:29.720 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:29.979 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:29.979 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:29.979 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:29.979 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:30.238 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:30.238 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:30.238 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:30.498 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:30.498 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:30.498 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:30.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:30.757 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:30.757 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:30.757 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:31.016 No valid GPT data, bailing 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:31.016 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:31.016 00:25:31.016 Discovery Log Number of Records 2, Generation counter 2 00:25:31.016 =====Discovery Log Entry 0====== 00:25:31.016 trtype: tcp 00:25:31.016 adrfam: ipv4 00:25:31.016 subtype: current discovery subsystem 00:25:31.016 treq: not specified, sq flow control disable supported 00:25:31.016 portid: 1 00:25:31.016 trsvcid: 4420 00:25:31.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:31.016 traddr: 10.0.0.1 00:25:31.016 eflags: none 00:25:31.016 sectype: none 00:25:31.016 =====Discovery Log Entry 1====== 00:25:31.016 trtype: tcp 00:25:31.016 adrfam: ipv4 00:25:31.016 subtype: nvme subsystem 00:25:31.016 treq: not specified, sq flow control disable supported 00:25:31.016 portid: 1 00:25:31.016 trsvcid: 4420 00:25:31.016 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:31.016 traddr: 10.0.0.1 00:25:31.016 eflags: none 00:25:31.016 sectype: none 00:25:31.017 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:31.017 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:31.017 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.277 ===================================================== 00:25:31.277 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:31.277 ===================================================== 00:25:31.277 Controller Capabilities/Features 00:25:31.277 ================================ 00:25:31.277 Vendor ID: 0000 00:25:31.277 Subsystem Vendor ID: 0000 00:25:31.277 Serial Number: d4bfcf67c0f8d461c965 00:25:31.277 Model Number: Linux 00:25:31.277 Firmware Version: 6.7.0-68 00:25:31.277 Recommended Arb Burst: 0 00:25:31.277 IEEE OUI Identifier: 00 00 00 00:25:31.277 Multi-path I/O 00:25:31.277 May have multiple subsystem ports: No 00:25:31.277 May have multiple 
controllers: No 00:25:31.277 Associated with SR-IOV VF: No 00:25:31.277 Max Data Transfer Size: Unlimited 00:25:31.277 Max Number of Namespaces: 0 00:25:31.277 Max Number of I/O Queues: 1024 00:25:31.277 NVMe Specification Version (VS): 1.3 00:25:31.277 NVMe Specification Version (Identify): 1.3 00:25:31.277 Maximum Queue Entries: 1024 00:25:31.277 Contiguous Queues Required: No 00:25:31.277 Arbitration Mechanisms Supported 00:25:31.277 Weighted Round Robin: Not Supported 00:25:31.277 Vendor Specific: Not Supported 00:25:31.277 Reset Timeout: 7500 ms 00:25:31.277 Doorbell Stride: 4 bytes 00:25:31.277 NVM Subsystem Reset: Not Supported 00:25:31.277 Command Sets Supported 00:25:31.277 NVM Command Set: Supported 00:25:31.277 Boot Partition: Not Supported 00:25:31.277 Memory Page Size Minimum: 4096 bytes 00:25:31.277 Memory Page Size Maximum: 4096 bytes 00:25:31.277 Persistent Memory Region: Not Supported 00:25:31.277 Optional Asynchronous Events Supported 00:25:31.277 Namespace Attribute Notices: Not Supported 00:25:31.277 Firmware Activation Notices: Not Supported 00:25:31.277 ANA Change Notices: Not Supported 00:25:31.277 PLE Aggregate Log Change Notices: Not Supported 00:25:31.277 LBA Status Info Alert Notices: Not Supported 00:25:31.277 EGE Aggregate Log Change Notices: Not Supported 00:25:31.277 Normal NVM Subsystem Shutdown event: Not Supported 00:25:31.277 Zone Descriptor Change Notices: Not Supported 00:25:31.277 Discovery Log Change Notices: Supported 00:25:31.277 Controller Attributes 00:25:31.277 128-bit Host Identifier: Not Supported 00:25:31.277 Non-Operational Permissive Mode: Not Supported 00:25:31.277 NVM Sets: Not Supported 00:25:31.277 Read Recovery Levels: Not Supported 00:25:31.277 Endurance Groups: Not Supported 00:25:31.277 Predictable Latency Mode: Not Supported 00:25:31.277 Traffic Based Keep ALive: Not Supported 00:25:31.277 Namespace Granularity: Not Supported 00:25:31.277 SQ Associations: Not Supported 00:25:31.277 UUID List: Not Supported 00:25:31.277 Multi-Domain Subsystem: Not Supported 00:25:31.277 Fixed Capacity Management: Not Supported 00:25:31.277 Variable Capacity Management: Not Supported 00:25:31.277 Delete Endurance Group: Not Supported 00:25:31.277 Delete NVM Set: Not Supported 00:25:31.277 Extended LBA Formats Supported: Not Supported 00:25:31.277 Flexible Data Placement Supported: Not Supported 00:25:31.277 00:25:31.277 Controller Memory Buffer Support 00:25:31.277 ================================ 00:25:31.277 Supported: No 00:25:31.277 00:25:31.277 Persistent Memory Region Support 00:25:31.277 ================================ 00:25:31.277 Supported: No 00:25:31.277 00:25:31.277 Admin Command Set Attributes 00:25:31.277 ============================ 00:25:31.277 Security Send/Receive: Not Supported 00:25:31.277 Format NVM: Not Supported 00:25:31.277 Firmware Activate/Download: Not Supported 00:25:31.277 Namespace Management: Not Supported 00:25:31.277 Device Self-Test: Not Supported 00:25:31.277 Directives: Not Supported 00:25:31.277 NVMe-MI: Not Supported 00:25:31.277 Virtualization Management: Not Supported 00:25:31.277 Doorbell Buffer Config: Not Supported 00:25:31.277 Get LBA Status Capability: Not Supported 00:25:31.277 Command & Feature Lockdown Capability: Not Supported 00:25:31.277 Abort Command Limit: 1 00:25:31.277 Async Event Request Limit: 1 00:25:31.277 Number of Firmware Slots: N/A 00:25:31.277 Firmware Slot 1 Read-Only: N/A 00:25:31.277 Firmware Activation Without Reset: N/A 00:25:31.277 Multiple Update Detection Support: N/A 
00:25:31.277 Firmware Update Granularity: No Information Provided 00:25:31.277 Per-Namespace SMART Log: No 00:25:31.277 Asymmetric Namespace Access Log Page: Not Supported 00:25:31.277 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:31.277 Command Effects Log Page: Not Supported 00:25:31.277 Get Log Page Extended Data: Supported 00:25:31.277 Telemetry Log Pages: Not Supported 00:25:31.277 Persistent Event Log Pages: Not Supported 00:25:31.277 Supported Log Pages Log Page: May Support 00:25:31.277 Commands Supported & Effects Log Page: Not Supported 00:25:31.277 Feature Identifiers & Effects Log Page:May Support 00:25:31.277 NVMe-MI Commands & Effects Log Page: May Support 00:25:31.277 Data Area 4 for Telemetry Log: Not Supported 00:25:31.277 Error Log Page Entries Supported: 1 00:25:31.277 Keep Alive: Not Supported 00:25:31.277 00:25:31.277 NVM Command Set Attributes 00:25:31.277 ========================== 00:25:31.277 Submission Queue Entry Size 00:25:31.277 Max: 1 00:25:31.277 Min: 1 00:25:31.277 Completion Queue Entry Size 00:25:31.277 Max: 1 00:25:31.277 Min: 1 00:25:31.277 Number of Namespaces: 0 00:25:31.277 Compare Command: Not Supported 00:25:31.278 Write Uncorrectable Command: Not Supported 00:25:31.278 Dataset Management Command: Not Supported 00:25:31.278 Write Zeroes Command: Not Supported 00:25:31.278 Set Features Save Field: Not Supported 00:25:31.278 Reservations: Not Supported 00:25:31.278 Timestamp: Not Supported 00:25:31.278 Copy: Not Supported 00:25:31.278 Volatile Write Cache: Not Present 00:25:31.278 Atomic Write Unit (Normal): 1 00:25:31.278 Atomic Write Unit (PFail): 1 00:25:31.278 Atomic Compare & Write Unit: 1 00:25:31.278 Fused Compare & Write: Not Supported 00:25:31.278 Scatter-Gather List 00:25:31.278 SGL Command Set: Supported 00:25:31.278 SGL Keyed: Not Supported 00:25:31.278 SGL Bit Bucket Descriptor: Not Supported 00:25:31.278 SGL Metadata Pointer: Not Supported 00:25:31.278 Oversized SGL: Not Supported 00:25:31.278 SGL Metadata Address: Not Supported 00:25:31.278 SGL Offset: Supported 00:25:31.278 Transport SGL Data Block: Not Supported 00:25:31.278 Replay Protected Memory Block: Not Supported 00:25:31.278 00:25:31.278 Firmware Slot Information 00:25:31.278 ========================= 00:25:31.278 Active slot: 0 00:25:31.278 00:25:31.278 00:25:31.278 Error Log 00:25:31.278 ========= 00:25:31.278 00:25:31.278 Active Namespaces 00:25:31.278 ================= 00:25:31.278 Discovery Log Page 00:25:31.278 ================== 00:25:31.278 Generation Counter: 2 00:25:31.278 Number of Records: 2 00:25:31.278 Record Format: 0 00:25:31.278 00:25:31.278 Discovery Log Entry 0 00:25:31.278 ---------------------- 00:25:31.278 Transport Type: 3 (TCP) 00:25:31.278 Address Family: 1 (IPv4) 00:25:31.278 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:31.278 Entry Flags: 00:25:31.278 Duplicate Returned Information: 0 00:25:31.278 Explicit Persistent Connection Support for Discovery: 0 00:25:31.278 Transport Requirements: 00:25:31.278 Secure Channel: Not Specified 00:25:31.278 Port ID: 1 (0x0001) 00:25:31.278 Controller ID: 65535 (0xffff) 00:25:31.278 Admin Max SQ Size: 32 00:25:31.278 Transport Service Identifier: 4420 00:25:31.278 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:31.278 Transport Address: 10.0.0.1 00:25:31.278 Discovery Log Entry 1 00:25:31.278 ---------------------- 00:25:31.278 Transport Type: 3 (TCP) 00:25:31.278 Address Family: 1 (IPv4) 00:25:31.278 Subsystem Type: 2 (NVM Subsystem) 00:25:31.278 Entry Flags: 
00:25:31.278 Duplicate Returned Information: 0 00:25:31.278 Explicit Persistent Connection Support for Discovery: 0 00:25:31.278 Transport Requirements: 00:25:31.278 Secure Channel: Not Specified 00:25:31.278 Port ID: 1 (0x0001) 00:25:31.278 Controller ID: 65535 (0xffff) 00:25:31.278 Admin Max SQ Size: 32 00:25:31.278 Transport Service Identifier: 4420 00:25:31.278 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:31.278 Transport Address: 10.0.0.1 00:25:31.278 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:31.278 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.278 get_feature(0x01) failed 00:25:31.278 get_feature(0x02) failed 00:25:31.278 get_feature(0x04) failed 00:25:31.278 ===================================================== 00:25:31.278 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:31.278 ===================================================== 00:25:31.278 Controller Capabilities/Features 00:25:31.278 ================================ 00:25:31.278 Vendor ID: 0000 00:25:31.278 Subsystem Vendor ID: 0000 00:25:31.278 Serial Number: 437ace50722a22030611 00:25:31.278 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:31.278 Firmware Version: 6.7.0-68 00:25:31.278 Recommended Arb Burst: 6 00:25:31.278 IEEE OUI Identifier: 00 00 00 00:25:31.278 Multi-path I/O 00:25:31.278 May have multiple subsystem ports: Yes 00:25:31.278 May have multiple controllers: Yes 00:25:31.278 Associated with SR-IOV VF: No 00:25:31.278 Max Data Transfer Size: Unlimited 00:25:31.278 Max Number of Namespaces: 1024 00:25:31.278 Max Number of I/O Queues: 128 00:25:31.278 NVMe Specification Version (VS): 1.3 00:25:31.278 NVMe Specification Version (Identify): 1.3 00:25:31.278 Maximum Queue Entries: 1024 00:25:31.278 Contiguous Queues Required: No 00:25:31.278 Arbitration Mechanisms Supported 00:25:31.278 Weighted Round Robin: Not Supported 00:25:31.278 Vendor Specific: Not Supported 00:25:31.278 Reset Timeout: 7500 ms 00:25:31.278 Doorbell Stride: 4 bytes 00:25:31.278 NVM Subsystem Reset: Not Supported 00:25:31.278 Command Sets Supported 00:25:31.278 NVM Command Set: Supported 00:25:31.278 Boot Partition: Not Supported 00:25:31.278 Memory Page Size Minimum: 4096 bytes 00:25:31.278 Memory Page Size Maximum: 4096 bytes 00:25:31.278 Persistent Memory Region: Not Supported 00:25:31.278 Optional Asynchronous Events Supported 00:25:31.278 Namespace Attribute Notices: Supported 00:25:31.278 Firmware Activation Notices: Not Supported 00:25:31.278 ANA Change Notices: Supported 00:25:31.278 PLE Aggregate Log Change Notices: Not Supported 00:25:31.278 LBA Status Info Alert Notices: Not Supported 00:25:31.278 EGE Aggregate Log Change Notices: Not Supported 00:25:31.278 Normal NVM Subsystem Shutdown event: Not Supported 00:25:31.278 Zone Descriptor Change Notices: Not Supported 00:25:31.278 Discovery Log Change Notices: Not Supported 00:25:31.278 Controller Attributes 00:25:31.278 128-bit Host Identifier: Supported 00:25:31.278 Non-Operational Permissive Mode: Not Supported 00:25:31.278 NVM Sets: Not Supported 00:25:31.278 Read Recovery Levels: Not Supported 00:25:31.278 Endurance Groups: Not Supported 00:25:31.278 Predictable Latency Mode: Not Supported 00:25:31.278 Traffic Based Keep ALive: Supported 00:25:31.278 Namespace Granularity: Not Supported 
00:25:31.278 SQ Associations: Not Supported 00:25:31.278 UUID List: Not Supported 00:25:31.278 Multi-Domain Subsystem: Not Supported 00:25:31.278 Fixed Capacity Management: Not Supported 00:25:31.278 Variable Capacity Management: Not Supported 00:25:31.278 Delete Endurance Group: Not Supported 00:25:31.278 Delete NVM Set: Not Supported 00:25:31.278 Extended LBA Formats Supported: Not Supported 00:25:31.278 Flexible Data Placement Supported: Not Supported 00:25:31.278 00:25:31.278 Controller Memory Buffer Support 00:25:31.278 ================================ 00:25:31.278 Supported: No 00:25:31.278 00:25:31.278 Persistent Memory Region Support 00:25:31.278 ================================ 00:25:31.278 Supported: No 00:25:31.278 00:25:31.278 Admin Command Set Attributes 00:25:31.278 ============================ 00:25:31.278 Security Send/Receive: Not Supported 00:25:31.278 Format NVM: Not Supported 00:25:31.278 Firmware Activate/Download: Not Supported 00:25:31.278 Namespace Management: Not Supported 00:25:31.278 Device Self-Test: Not Supported 00:25:31.278 Directives: Not Supported 00:25:31.278 NVMe-MI: Not Supported 00:25:31.278 Virtualization Management: Not Supported 00:25:31.278 Doorbell Buffer Config: Not Supported 00:25:31.278 Get LBA Status Capability: Not Supported 00:25:31.278 Command & Feature Lockdown Capability: Not Supported 00:25:31.278 Abort Command Limit: 4 00:25:31.278 Async Event Request Limit: 4 00:25:31.278 Number of Firmware Slots: N/A 00:25:31.278 Firmware Slot 1 Read-Only: N/A 00:25:31.278 Firmware Activation Without Reset: N/A 00:25:31.278 Multiple Update Detection Support: N/A 00:25:31.278 Firmware Update Granularity: No Information Provided 00:25:31.278 Per-Namespace SMART Log: Yes 00:25:31.278 Asymmetric Namespace Access Log Page: Supported 00:25:31.278 ANA Transition Time : 10 sec 00:25:31.278 00:25:31.278 Asymmetric Namespace Access Capabilities 00:25:31.278 ANA Optimized State : Supported 00:25:31.278 ANA Non-Optimized State : Supported 00:25:31.278 ANA Inaccessible State : Supported 00:25:31.278 ANA Persistent Loss State : Supported 00:25:31.278 ANA Change State : Supported 00:25:31.278 ANAGRPID is not changed : No 00:25:31.278 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:31.278 00:25:31.278 ANA Group Identifier Maximum : 128 00:25:31.278 Number of ANA Group Identifiers : 128 00:25:31.278 Max Number of Allowed Namespaces : 1024 00:25:31.278 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:31.278 Command Effects Log Page: Supported 00:25:31.278 Get Log Page Extended Data: Supported 00:25:31.278 Telemetry Log Pages: Not Supported 00:25:31.278 Persistent Event Log Pages: Not Supported 00:25:31.278 Supported Log Pages Log Page: May Support 00:25:31.278 Commands Supported & Effects Log Page: Not Supported 00:25:31.278 Feature Identifiers & Effects Log Page:May Support 00:25:31.278 NVMe-MI Commands & Effects Log Page: May Support 00:25:31.278 Data Area 4 for Telemetry Log: Not Supported 00:25:31.278 Error Log Page Entries Supported: 128 00:25:31.278 Keep Alive: Supported 00:25:31.278 Keep Alive Granularity: 1000 ms 00:25:31.278 00:25:31.278 NVM Command Set Attributes 00:25:31.278 ========================== 00:25:31.278 Submission Queue Entry Size 00:25:31.278 Max: 64 00:25:31.278 Min: 64 00:25:31.278 Completion Queue Entry Size 00:25:31.278 Max: 16 00:25:31.278 Min: 16 00:25:31.278 Number of Namespaces: 1024 00:25:31.278 Compare Command: Not Supported 00:25:31.278 Write Uncorrectable Command: Not Supported 00:25:31.279 Dataset Management Command: Supported 
00:25:31.279 Write Zeroes Command: Supported 00:25:31.279 Set Features Save Field: Not Supported 00:25:31.279 Reservations: Not Supported 00:25:31.279 Timestamp: Not Supported 00:25:31.279 Copy: Not Supported 00:25:31.279 Volatile Write Cache: Present 00:25:31.279 Atomic Write Unit (Normal): 1 00:25:31.279 Atomic Write Unit (PFail): 1 00:25:31.279 Atomic Compare & Write Unit: 1 00:25:31.279 Fused Compare & Write: Not Supported 00:25:31.279 Scatter-Gather List 00:25:31.279 SGL Command Set: Supported 00:25:31.279 SGL Keyed: Not Supported 00:25:31.279 SGL Bit Bucket Descriptor: Not Supported 00:25:31.279 SGL Metadata Pointer: Not Supported 00:25:31.279 Oversized SGL: Not Supported 00:25:31.279 SGL Metadata Address: Not Supported 00:25:31.279 SGL Offset: Supported 00:25:31.279 Transport SGL Data Block: Not Supported 00:25:31.279 Replay Protected Memory Block: Not Supported 00:25:31.279 00:25:31.279 Firmware Slot Information 00:25:31.279 ========================= 00:25:31.279 Active slot: 0 00:25:31.279 00:25:31.279 Asymmetric Namespace Access 00:25:31.279 =========================== 00:25:31.279 Change Count : 0 00:25:31.279 Number of ANA Group Descriptors : 1 00:25:31.279 ANA Group Descriptor : 0 00:25:31.279 ANA Group ID : 1 00:25:31.279 Number of NSID Values : 1 00:25:31.279 Change Count : 0 00:25:31.279 ANA State : 1 00:25:31.279 Namespace Identifier : 1 00:25:31.279 00:25:31.279 Commands Supported and Effects 00:25:31.279 ============================== 00:25:31.279 Admin Commands 00:25:31.279 -------------- 00:25:31.279 Get Log Page (02h): Supported 00:25:31.279 Identify (06h): Supported 00:25:31.279 Abort (08h): Supported 00:25:31.279 Set Features (09h): Supported 00:25:31.279 Get Features (0Ah): Supported 00:25:31.279 Asynchronous Event Request (0Ch): Supported 00:25:31.279 Keep Alive (18h): Supported 00:25:31.279 I/O Commands 00:25:31.279 ------------ 00:25:31.279 Flush (00h): Supported 00:25:31.279 Write (01h): Supported LBA-Change 00:25:31.279 Read (02h): Supported 00:25:31.279 Write Zeroes (08h): Supported LBA-Change 00:25:31.279 Dataset Management (09h): Supported 00:25:31.279 00:25:31.279 Error Log 00:25:31.279 ========= 00:25:31.279 Entry: 0 00:25:31.279 Error Count: 0x3 00:25:31.279 Submission Queue Id: 0x0 00:25:31.279 Command Id: 0x5 00:25:31.279 Phase Bit: 0 00:25:31.279 Status Code: 0x2 00:25:31.279 Status Code Type: 0x0 00:25:31.279 Do Not Retry: 1 00:25:31.279 Error Location: 0x28 00:25:31.279 LBA: 0x0 00:25:31.279 Namespace: 0x0 00:25:31.279 Vendor Log Page: 0x0 00:25:31.279 ----------- 00:25:31.279 Entry: 1 00:25:31.279 Error Count: 0x2 00:25:31.279 Submission Queue Id: 0x0 00:25:31.279 Command Id: 0x5 00:25:31.279 Phase Bit: 0 00:25:31.279 Status Code: 0x2 00:25:31.279 Status Code Type: 0x0 00:25:31.279 Do Not Retry: 1 00:25:31.279 Error Location: 0x28 00:25:31.279 LBA: 0x0 00:25:31.279 Namespace: 0x0 00:25:31.279 Vendor Log Page: 0x0 00:25:31.279 ----------- 00:25:31.279 Entry: 2 00:25:31.279 Error Count: 0x1 00:25:31.279 Submission Queue Id: 0x0 00:25:31.279 Command Id: 0x4 00:25:31.279 Phase Bit: 0 00:25:31.279 Status Code: 0x2 00:25:31.279 Status Code Type: 0x0 00:25:31.279 Do Not Retry: 1 00:25:31.279 Error Location: 0x28 00:25:31.279 LBA: 0x0 00:25:31.279 Namespace: 0x0 00:25:31.279 Vendor Log Page: 0x0 00:25:31.279 00:25:31.279 Number of Queues 00:25:31.279 ================ 00:25:31.279 Number of I/O Submission Queues: 128 00:25:31.279 Number of I/O Completion Queues: 128 00:25:31.279 00:25:31.279 ZNS Specific Controller Data 00:25:31.279 
============================ 00:25:31.279 Zone Append Size Limit: 0 00:25:31.279 00:25:31.279 00:25:31.279 Active Namespaces 00:25:31.279 ================= 00:25:31.279 get_feature(0x05) failed 00:25:31.279 Namespace ID:1 00:25:31.279 Command Set Identifier: NVM (00h) 00:25:31.279 Deallocate: Supported 00:25:31.279 Deallocated/Unwritten Error: Not Supported 00:25:31.279 Deallocated Read Value: Unknown 00:25:31.279 Deallocate in Write Zeroes: Not Supported 00:25:31.279 Deallocated Guard Field: 0xFFFF 00:25:31.279 Flush: Supported 00:25:31.279 Reservation: Not Supported 00:25:31.279 Namespace Sharing Capabilities: Multiple Controllers 00:25:31.279 Size (in LBAs): 3125627568 (1490GiB) 00:25:31.279 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:31.279 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:31.279 UUID: 87a522dd-1aa1-4797-839d-eb15f1f2a1c5 00:25:31.279 Thin Provisioning: Not Supported 00:25:31.279 Per-NS Atomic Units: Yes 00:25:31.279 Atomic Boundary Size (Normal): 0 00:25:31.279 Atomic Boundary Size (PFail): 0 00:25:31.279 Atomic Boundary Offset: 0 00:25:31.279 NGUID/EUI64 Never Reused: No 00:25:31.279 ANA group ID: 1 00:25:31.279 Namespace Write Protected: No 00:25:31.279 Number of LBA Formats: 1 00:25:31.279 Current LBA Format: LBA Format #00 00:25:31.279 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:31.279 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.279 rmmod nvme_tcp 00:25:31.279 rmmod nvme_fabrics 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.279 18:40:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
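The nvmftestfini sequence above unloads the initiator-side transport modules and removes the namespaced test network, and the clean_kernel_target sequence traced next unwinds the kernel target's configfs tree in reverse creation order. A condensed manual equivalent for this run (remove_spdk_ns is an SPDK test helper; `ip netns delete` is the assumed operation underneath it):

    # initiator side (nvmftestfini)
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk      # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1
    # kernel target side (clean_kernel_target): reverse of creation order
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet

An `echo 0` (presumably disabling the namespace) precedes the rm/rmdir steps in the trace; its redirect target is not shown there.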
00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:33.815 18:40:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.343 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.343 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:37.719 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:37.719 00:25:37.719 real 0m16.814s 00:25:37.719 user 0m4.129s 00:25:37.719 sys 0m8.430s 00:25:37.719 18:40:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.719 18:40:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.719 ************************************ 00:25:37.719 END TEST nvmf_identify_kernel_target 00:25:37.719 ************************************ 00:25:37.719 18:40:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:37.719 18:40:23 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:37.719 18:40:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.719 18:40:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.719 18:40:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.977 ************************************ 00:25:37.977 START TEST nvmf_auth_host 00:25:37.977 ************************************ 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:37.977 * Looking for test storage... 00:25:37.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.977 18:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.246 
18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:43.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:43.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:43.246 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:43.246 Found net devices under 0000:86:00.1: cvl_0_1 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:43.246 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.247 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.247 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:43.247 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:43.247 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.247 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:43.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:25:43.505 00:25:43.505 --- 10.0.0.2 ping statistics --- 00:25:43.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.505 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:43.505 00:25:43.505 --- 10.0.0.1 ping statistics --- 00:25:43.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.505 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4046764 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4046764 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4046764 ']' 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
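Network prologue, condensed: the harness splits the two cvl ports into a loopback topology, moving cvl_0_0 into a private namespace at 10.0.0.2 (the target-side address) and leaving cvl_0_1 in the root namespace at 10.0.0.1 (the initiator side), then opens TCP/4420 and verifies reachability both ways:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

NVMF_APP is then prefixed with the namespace command, so the nvmf_tgt started above runs inside cvl_0_0_ns_spdk while the kernel target configured later in this run listens on 10.0.0.1 in the root namespace; the sub-millisecond ping RTTs are consistent with the loopback topology.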
00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.505 18:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8280c2d774b9b43f6150dabe6a90d649 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eOW 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8280c2d774b9b43f6150dabe6a90d649 0 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8280c2d774b9b43f6150dabe6a90d649 0 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8280c2d774b9b43f6150dabe6a90d649 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eOW 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eOW 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eOW 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:44.442 
18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e7e037f79d8c0cce27658671d37b011f9a35f06b8cca4e7fc9b018bd1cc8d516 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sr2 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7e037f79d8c0cce27658671d37b011f9a35f06b8cca4e7fc9b018bd1cc8d516 3 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7e037f79d8c0cce27658671d37b011f9a35f06b8cca4e7fc9b018bd1cc8d516 3 00:25:44.442 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7e037f79d8c0cce27658671d37b011f9a35f06b8cca4e7fc9b018bd1cc8d516 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sr2 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sr2 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sr2 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9488dc35d84fa8d07dcc7192e4d4fb17835098e1c6893fd0 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xfs 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9488dc35d84fa8d07dcc7192e4d4fb17835098e1c6893fd0 0 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9488dc35d84fa8d07dcc7192e4d4fb17835098e1c6893fd0 0 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9488dc35d84fa8d07dcc7192e4d4fb17835098e1c6893fd0 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:44.443 18:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xfs 00:25:44.702 18:40:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xfs 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xfs 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4dcbd318350ff7b1acf7d7539595e73800e80dfb6207cfb 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.86Z 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4dcbd318350ff7b1acf7d7539595e73800e80dfb6207cfb 2 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4dcbd318350ff7b1acf7d7539595e73800e80dfb6207cfb 2 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4dcbd318350ff7b1acf7d7539595e73800e80dfb6207cfb 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.86Z 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.86Z 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.86Z 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9710bdfd2587f40f7ad25427e6c9568c 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xtf 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9710bdfd2587f40f7ad25427e6c9568c 1 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9710bdfd2587f40f7ad25427e6c9568c 1 
00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9710bdfd2587f40f7ad25427e6c9568c 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xtf 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xtf 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xtf 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:44.702 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccea1b8a0244a9892b8f967b8e67fd78 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FgF 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccea1b8a0244a9892b8f967b8e67fd78 1 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccea1b8a0244a9892b8f967b8e67fd78 1 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccea1b8a0244a9892b8f967b8e67fd78 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FgF 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FgF 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FgF 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=025a7f2ed00f3a910ddafcd3bc690144b9fecb0c2e42e890 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o7b 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 025a7f2ed00f3a910ddafcd3bc690144b9fecb0c2e42e890 2 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 025a7f2ed00f3a910ddafcd3bc690144b9fecb0c2e42e890 2 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=025a7f2ed00f3a910ddafcd3bc690144b9fecb0c2e42e890 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:44.703 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o7b 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o7b 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.o7b 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e44230509fca78b79d62a836be3786b 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UBb 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e44230509fca78b79d62a836be3786b 0 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e44230509fca78b79d62a836be3786b 0 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e44230509fca78b79d62a836be3786b 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UBb 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UBb 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UBb 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=49e7911b5144b42aa3a372bc563528052a3069fd2b08bd6ea4c65220bdc03468 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wqz 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 49e7911b5144b42aa3a372bc563528052a3069fd2b08bd6ea4c65220bdc03468 3 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 49e7911b5144b42aa3a372bc563528052a3069fd2b08bd6ea4c65220bdc03468 3 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=49e7911b5144b42aa3a372bc563528052a3069fd2b08bd6ea4c65220bdc03468 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wqz 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wqz 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Wqz 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4046764 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 4046764 ']' 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
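Key generation, annotated: each gen_dhchap_key call draws random bytes via `xxd -p -c0 -l <hexlen/2> /dev/urandom` and feeds the resulting hex text through an inline python step (elided by the trace) that wraps it in the NVMe DH-HMAC-CHAP secret representation `DHHC-1:0<digest>:<base64 payload>:`, with digest 0/1/2/3 for null/sha256/sha384/sha512 per the digests map above. Decoding the DHHC-1:00: string that appears later in this run gives back exactly the 48-character hex text generated here for key1 plus a 4-byte suffix, i.e. the ASCII hex string itself is the secret. A sketch of the elided formatting step, assuming the standard little-endian CRC-32 integrity suffix (not directly verifiable from this trace):

    key_hex=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
    python3 - "$key_hex" <<'PY'
    import base64, sys, zlib
    secret = sys.argv[1].encode()                    # the hex text itself is the key material
    crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed little-endian CRC-32 suffix
    print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode())
    PY

The output is written to a mktemp'd /tmp/spdk.key-<digest>.XXX file and chmod'd 0600.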
00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.962 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eOW 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sr2 ]] 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sr2 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.220 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xfs 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.86Z ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.86Z 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xtf 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FgF ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FgF 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.o7b 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UBb ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UBb 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Wqz 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
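The keyring_file_add_key loop above loads every generated key file into the running nvmf_tgt, pairing each keyN with its optional ckeyN controller counterpart. rpc_cmd is the suite's wrapper around SPDK's JSON-RPC client, so the direct equivalent would be roughly (the rpc.py path is assumed from the standard repo layout):

    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.eOW
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sr2
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.xfs
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.86Z
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.xtf
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FgF
    scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.o7b
    scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.UBb
    scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.Wqz

key4 has no controller key (ckeys[4] is empty), so the `[[ -n ... ]]` guard skips a ckey4 registration.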
00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:45.221 18:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:47.754 Waiting for block devices as requested 00:25:47.754 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:48.012 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:48.013 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:48.013 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:48.270 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:48.270 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:48.270 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:48.270 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:48.528 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:48.528 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:48.528 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:48.786 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:48.786 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:48.786 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:48.786 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:49.044 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:49.044 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:49.611 No valid GPT data, bailing 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:49.611 18:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:25:49.612 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:49.871
00:25:49.871 Discovery Log Number of Records 2, Generation counter 2
00:25:49.871 =====Discovery Log Entry 0======
00:25:49.871 trtype: tcp
00:25:49.871 adrfam: ipv4
00:25:49.871 subtype: current discovery subsystem
00:25:49.871 treq: not specified, sq flow control disable supported
00:25:49.871 portid: 1
00:25:49.871 trsvcid: 4420
00:25:49.871 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:49.871 traddr: 10.0.0.1
00:25:49.871 eflags: none
00:25:49.871 sectype: none
00:25:49.871 =====Discovery Log Entry 1======
00:25:49.871 trtype: tcp
00:25:49.871 adrfam: ipv4
00:25:49.871 subtype: nvme subsystem
00:25:49.871 treq: not specified, sq flow control disable supported
00:25:49.871 portid: 1
00:25:49.871 trsvcid: 4420
00:25:49.871 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:49.871 traddr: 10.0.0.1
00:25:49.871 eflags: none
00:25:49.871 sectype: none
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==:
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 
]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.871 nvme0n1 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.871 18:40:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.871 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.130 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.130 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.131 
18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.131 nvme0n1 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.131 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.391 18:40:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.391 nvme0n1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
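Two details of the loop above are obscured because xtrace prints redirections as bare echo commands. On the target side, nvmet_auth_set_key writes the digest, DH group, and DHHC-1 secrets into the host's entry under configfs; the attribute names below are the standard kernel nvmet ones, so treat this as a plausible reconstruction rather than a transcript:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"                  # digest for this pass
  echo ffdhe2048 > "$host/dhchap_dhgroup"                    # FFDHE group for this pass
  echo "$key" > "$host/dhchap_key"                           # DHHC-1:... host secret printed above
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional keyids

The same caveat covers the earlier configure_kernel_target echoes: likely targets are the subsystem's attr_model (the SPDK-nqn... string), namespaces/1/device_path and enable (/dev/nvme0n1, 1), and ports/1/addr_traddr, addr_trtype, addr_trsvcid, addr_adrfam (10.0.0.1, tcp, 4420, ipv4); host/auth.sh@37's echo 0 most plausibly clears attr_allow_any_host so that only the allowed_hosts symlink grants access.

On the host side, each connect_authenticate pass restricts the initiator to one digest/dhgroup combination, attaches with the matching key pair, verifies the controller came up, and detaches before the next pass. Condensed from the RPCs visible in the trace, with rpc defined as in the earlier sketch:

  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0              # tear down before the next combination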
00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.391 18:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.651 nvme0n1 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:50.651 18:40:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.651 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 nvme0n1 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 nvme0n1 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.169 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.170 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 nvme0n1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.429 18:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.688 nvme0n1 00:25:51.688 
18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.688 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.689 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.948 nvme0n1 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.948 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.949 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.208 nvme0n1 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.208 
18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.208 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.209 18:40:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.209 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.468 nvme0n1 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:52.468 18:40:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.468 18:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.738 nvme0n1 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.738 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.739 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.739 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.739 18:40:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.739 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.739 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.078 nvme0n1 00:25:53.078 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.078 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.078 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.078 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.079 18:40:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.079 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.339 nvme0n1 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
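
[annotation] The trace repeats one pattern per (digest, dhgroup, keyid) combination: host/auth.sh restricts the initiator's allowed DH-HMAC-CHAP parameters, attaches a controller with the matching key material, verifies the controller appears, then detaches it for the next iteration. A minimal sketch of one sha256/ffdhe4096 iteration, assembled from the commands visible above — it assumes rpc_cmd forwards to SPDK's scripts/rpc.py against the running target and that the key3/ckey3 key names were registered earlier in the test:

# Restrict the host to sha256 + ffdhe4096 before dialing in (values taken
# from the trace; any digest/dhgroup pair from the loop works the same way).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Connect to the target at 10.0.0.1:4420, authenticating with key3 and
# offering ckey3 for bidirectional (controller) authentication.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Success is detected by the controller showing up under its bdev name,
# after which it is torn down so the loop can try the next keyid.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

(keyid 4 in the trace has no ckey, so its attach omits --dhchap-ctrlr-key: unidirectional authentication only.)
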
00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.339 18:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.599 nvme0n1 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.599 18:40:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.599 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.858 nvme0n1 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.858 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:54.118 18:40:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.118 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.377 nvme0n1 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.377 
18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.377 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.636 18:40:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.636 18:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.895 nvme0n1 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.895 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.461 nvme0n1 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.461 
18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.461 18:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.719 nvme0n1 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.719 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.720 18:40:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.977 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.235 nvme0n1 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.235 18:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.236 18:40:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.236 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.236 18:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.801 nvme0n1 00:25:56.801 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.801 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.801 18:40:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.801 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.801 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.801 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.058 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.059 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.625 nvme0n1 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.625 18:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.625 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.197 nvme0n1 00:25:58.197 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.197 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.197 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.198 
18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
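The nvmet_auth_set_key trace above (host/auth.sh@42-51) is the target-side half of each iteration: before the initiator dials in, it pushes the digest, DH group, key and optional controller key for the current keyid into the kernel nvmet target. A minimal sketch of where those echo lines land, assuming the standard nvmet configfs attribute names (the host path here is illustrative, not taken from this run):

nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac(${digest})" > "${host}/dhchap_hash"      # @48: e.g. 'hmac(sha256)'
  echo "${dhgroup}" > "${host}/dhchap_dhgroup"        # @49: e.g. ffdhe8192
  echo "${key}" > "${host}/dhchap_key"                # @50: DHHC-1:0X:...
  [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # @51: skipped when ckey is empty
}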
00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.198 18:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.767 nvme0n1 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.767 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.026 
18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.026 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.594 nvme0n1 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.595 18:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.595 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 nvme0n1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
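The connect_authenticate passes that pair with each key setup are the initiator-side half, and the trace repeats them verbatim for every digest/dhgroup/keyid combination. Condensed into one place for readability (a reconstruction from the trace, not the verbatim script; rpc_cmd forwards to the running SPDK target's RPC socket):

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # @58: the controller key is spliced in only when ckeys[keyid] is set and non-empty
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # @60: restrict the initiator to the combination under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
  # @61: attach with the matching key material
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
  # @64-65: authentication succeeded iff the controller materialized; then detach for the next pass
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}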
00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 nvme0n1 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.115 nvme0n1 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.115 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.374 nvme0n1 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.374 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.375 18:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 nvme0n1 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.633 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
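The get_main_ns_ip block that keeps reappearing (nvmf/common.sh@741-755) only resolves which environment variable carries the address to dial for the active transport, then dereferences it. A sketch of that logic, assuming the transport name arrives in a variable such as TEST_TRANSPORT (the trace only ever shows its expanded value, tcp):

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP
  ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}  # @748: -> NVMF_INITIATOR_IP here
  echo "${!ip}"                                # @755: indirect expansion; 10.0.0.1 in this run
}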
00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.634 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 nvme0n1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
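The DHHC-1 strings cycled through keys/ckeys follow the NVMe TP 8006 secret-interchange format, DHHC-1:<t>:<base64>:, where <t> of 00 marks a secret used as-is and 01/02/03 mark secrets transformed with SHA-256/384/512; as I read that format, the base64 payload is the raw secret followed by a 4-byte CRC-32 of the secret. A quick length check on the keyid=1 secret above is consistent with that layout (48-byte secret plus 4-byte CRC):

base64 -d <<< "OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==" | wc -c  # -> 52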
00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.893 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 nvme0n1 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.153 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 nvme0n1 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.412 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.413 18:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 nvme0n1 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.672 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.931 nvme0n1 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.931 18:40:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.931 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.190 nvme0n1 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.190 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.448 18:40:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 nvme0n1 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.707 18:40:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.707 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.708 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.967 nvme0n1 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:02.967 18:40:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.967 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 nvme0n1 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:03.226 18:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.485 nvme0n1 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.485 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:03.744 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.745 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.003 nvme0n1 00:26:04.003 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.004 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.572 nvme0n1 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.572 18:40:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.572 18:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.830 nvme0n1 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.830 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.090 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.091 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.091 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.091 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 nvme0n1 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.350 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.351 18:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.918 nvme0n1 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.918 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.487 nvme0n1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.487 18:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.055 nvme0n1 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.055 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.056 18:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.314 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.314 18:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.881 nvme0n1 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.881 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.882 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.882 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 nvme0n1 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.449 18:40:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.449 18:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.016 nvme0n1 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.016 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.275 nvme0n1 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.275 18:40:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.275 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.276 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.535 nvme0n1 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.535 18:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 nvme0n1 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.794 18:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.794 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.795 18:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.795 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 nvme0n1 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 nvme0n1 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.054 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m: 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.313 nvme0n1 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.313 
18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.313 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.572 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.573 18:40:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.573 18:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.573 nvme0n1 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.573 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.831 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
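The nvmet_auth_set_key calls traced around this point program the kernel target's DH-HMAC-CHAP parameters one keyid at a time; each bare echo in the trace presumably lands in the nvmet configfs host directory. A minimal sketch of the equivalent manual configuration for the sha512/ffdhe3072/keyid=2 iteration just shown, assuming the default configfs mount point and the host NQN used throughout this run (the exact path is an assumption; the attribute names are the kernel's nvmet auth attributes, and the key strings are copied verbatim from the trace):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # path assumed, not shown in this log
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest under test in this iteration
  echo ffdhe3072 > "$host/dhchap_dhgroup"        # DH group under test
  # host key for keyid 2, as echoed in the trace
  echo 'DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:' > "$host/dhchap_key"
  # controller key; setting it makes the authentication bidirectional
  echo 'DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:' > "$host/dhchap_ctrl_key"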
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.832 nvme0n1
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.832 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.091 nvme0n1
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.091 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:11.351 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.351 nvme0n1
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]]
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.610 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.611 18:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.870 nvme0n1
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.870 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.129 nvme0n1
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:
00:26:12.129 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]]
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.130 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.389 nvme0n1
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.389 18:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.648 nvme0n1
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:12.648 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.649 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.907 nvme0n1
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.907 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.166 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.425 nvme0n1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==:
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==:
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==:
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:13.425 18:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.021 nvme0n1
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs:
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/:
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.021 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.280 nvme0n1
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.280 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==:
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6:
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.538 18:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.795 nvme0n1
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=:
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:14.795 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:14.796 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:14.796 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.796 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.360 nvme0n1
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:15.360 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI4MGMyZDc3NGI5YjQzZjYxNTBkYWJlNmE5MGQ2NDmaDH0m:
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=: ]]
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdlMDM3Zjc5ZDhjMGNjZTI3NjU4NjcxZDM3YjAxMWY5YTM1ZjA2YjhjY2E0ZTdmYzliMDE4YmQxY2M4ZDUxNrPQIkU=:
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.361 18:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.926 nvme0n1 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.926 18:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.927 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.927 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.927 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.494 nvme0n1 00:26:16.494 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.494 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.494 18:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.494 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.494 18:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.494 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.494 18:41:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.494 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.494 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.494 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcxMGJkZmQyNTg3ZjQwZjdhZDI1NDI3ZTZjOTU2OGPfZHvs: 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2NlYTFiOGEwMjQ0YTk4OTJiOGY5NjdiOGU2N2ZkNziJKyu/: 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.751 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.317 nvme0n1 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDI1YTdmMmVkMDBmM2E5MTBkZGFmY2QzYmM2OTAxNDRiOWZlY2IwYzJlNDJlODkwtZnWhA==: 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU0NDIzMDUwOWZjYTc4Yjc5ZDYyYTgzNmJlMzc4NmLQxCx6: 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:17.317 18:41:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.317 18:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.884 nvme0n1 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.884 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDllNzkxMWI1MTQ0YjQyYWEzYTM3MmJjNTYzNTI4MDUyYTMwNjlmZDJiMDhiZDZlYTRjNjUyMjBiZGMwMzQ2OH4pDeA=: 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:17.885 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.450 nvme0n1 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ4OGRjMzVkODRmYThkMDdkY2M3MTkyZTRkNGZiMTc4MzUwOThlMWM2ODkzZmQwU1VVhg==: 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: ]] 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzRkY2JkMzE4MzUwZmY3YjFhY2Y3ZDc1Mzk1OTVlNzM4MDBlODBkZmI2MjA3Y2ZiKzyiTg==: 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.450 18:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.450 
18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:18.450 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.708 request:
00:26:18.708 {
00:26:18.708 "name": "nvme0",
00:26:18.708 "trtype": "tcp",
00:26:18.708 "traddr": "10.0.0.1",
00:26:18.708 "adrfam": "ipv4",
00:26:18.708 "trsvcid": "4420",
00:26:18.708 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:18.708 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:18.708 "prchk_reftag": false,
00:26:18.708 "prchk_guard": false,
00:26:18.708 "hdgst": false,
00:26:18.708 "ddgst": false,
00:26:18.708 "method": "bdev_nvme_attach_controller",
00:26:18.708 "req_id": 1
00:26:18.708 }
00:26:18.708 Got JSON-RPC error response
00:26:18.708 response:
00:26:18.708 {
00:26:18.708 "code": -5,
00:26:18.708 "message": "Input/output error"
00:26:18.708 }
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.708 request:
00:26:18.708 {
00:26:18.708 "name": "nvme0",
00:26:18.708 "trtype": "tcp",
00:26:18.708 "traddr": "10.0.0.1",
00:26:18.708 "adrfam": "ipv4",
00:26:18.708 "trsvcid": "4420",
00:26:18.708 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:18.708 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:18.708 "prchk_reftag": false,
00:26:18.708 "prchk_guard": false,
00:26:18.708 "hdgst": false,
00:26:18.708 "ddgst": false,
00:26:18.708 "dhchap_key": "key2",
00:26:18.708 "method": "bdev_nvme_attach_controller",
00:26:18.708 "req_id": 1
00:26:18.708 }
00:26:18.708 Got JSON-RPC error response
00:26:18.708 response:
00:26:18.708 {
00:26:18.708 "code": -5,
00:26:18.708 "message": "Input/output error"
00:26:18.708 }
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:18.708 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.967 request:
00:26:18.967 {
00:26:18.967 "name": "nvme0",
00:26:18.967 "trtype": "tcp",
00:26:18.967 "traddr": "10.0.0.1",
00:26:18.967 "adrfam": "ipv4",
00:26:18.967 "trsvcid": "4420",
00:26:18.967 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:18.967 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:18.967 "prchk_reftag": false,
00:26:18.967 "prchk_guard": false,
00:26:18.967 "hdgst": false,
00:26:18.967 "ddgst": false,
00:26:18.967 "dhchap_key": "key1",
00:26:18.967 "dhchap_ctrlr_key": "ckey2",
00:26:18.967 "method": "bdev_nvme_attach_controller",
00:26:18.967 "req_id": 1
00:26:18.967 }
00:26:18.967 Got JSON-RPC error response
00:26:18.967 response:
00:26:18.967 {
00:26:18.967 "code": -5,
00:26:18.967 "message": "Input/output error"
00:26:18.967 }
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:18.967 rmmod nvme_tcp
00:26:18.967 rmmod nvme_fabrics
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4046764 ']'
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4046764
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 4046764 ']'
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 4046764
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4046764
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4046764'
00:26:18.967 killing process with pid 4046764
00:26:18.967 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 4046764
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 4046764
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
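The three rejected attach attempts above (no key at all, the wrong key2, and key1 paired with a mismatched ckey2) all surface the same way: a JSON-RPC -5 Input/output error from bdev_nvme_attach_controller, with bdev_nvme_get_controllers afterwards reporting length 0. A minimal standalone sketch of that negative check, assuming the SPDK checkout path shown in this log and that key1/ckey2 name key material the harness registered with the SPDK application earlier in the run (not shown in this excerpt); the script itself is illustrative, not the harness's code:

  #!/usr/bin/env bash
  # Hypothetical probe mirroring the NOT rpc_cmd ... checks in the log above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # An attach with a deliberately mismatched DH-HMAC-CHAP controller key must fail.
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "unexpected: attach succeeded with a bad controller key" >&2
      exit 1
  fi
  # ...and it must not leave a half-created controller behind.
  [[ "$("$rpc" bdev_nvme_get_controllers | jq length)" -eq 0 ]]

The interesting design point the log demonstrates is that a failed authentication and a plain connection failure look identical to the caller (both are -5), so the test can only distinguish them by construction: the target is known reachable, hence a failure here must be the auth rejection.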
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:19.226 18:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:26:21.127 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:26:21.386 18:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:23.921 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:26:23.921 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:26:23.921 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:26:23.921 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:26:23.921 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:26:24.180 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:26:25.557 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:26:25.817 18:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eOW /tmp/spdk.key-null.xfs /tmp/spdk.key-sha256.xtf /tmp/spdk.key-sha384.o7b /tmp/spdk.key-sha512.Wqz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
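Read bottom-up, the clean_kernel_target sequence above also documents how the kernel nvmet target was assembled for this test in the first place: a subsystem with one namespace, a TCP port linked to it, and a host object carrying the DH-HMAC-CHAP material that the nvmet_auth_set_key steps echoed earlier. A sketch of that forward direction, using the configfs paths visible in this log; the dhchap_* attribute names follow the kernel's nvmet host object and the namespace device is a placeholder, so treat this as an assumption-laden outline rather than the harness's exact code:

  #!/usr/bin/env bash
  # Assemble a kernel nvmet/tcp target with DH-HMAC-CHAP, inverting the
  # rm/rmdir teardown above. Run as root; /dev/nullb0 is a stand-in device.
  cfs=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0
  modprobe nvmet_tcp
  mkdir -p "$cfs/subsystems/$subnqn/namespaces/1" "$cfs/hosts/$hostnqn" "$cfs/ports/1"
  echo /dev/nullb0 > "$cfs/subsystems/$subnqn/namespaces/1/device_path"  # placeholder device
  echo 1 > "$cfs/subsystems/$subnqn/namespaces/1/enable"                 # the teardown echoes 0 here
  # Host auth material, in the shape nvmet_auth_set_key echoes it earlier in the run
  echo 'hmac(sha512)'  > "$cfs/hosts/$hostnqn/dhchap_hash"
  echo ffdhe8192       > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
  echo 'DHHC-1:00:...' > "$cfs/hosts/$hostnqn/dhchap_key"                # real key elided
  ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/$hostnqn"
  # TCP listener on the address/port the attach calls in this log target
  echo tcp      > "$cfs/ports/1/addr_trtype"
  echo ipv4     > "$cfs/ports/1/addr_adrfam"
  echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
  echo 4420     > "$cfs/ports/1/addr_trsvcid"
  ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/$subnqn"

Because only the allowed_hosts link admits a host, removing that link first (as host/auth.sh@25 does) is what lets the rest of the tree be torn down without racing new connections.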
00:26:25.817 18:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:28.353 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:26:28.353 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:26:28.353 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:26:28.612
00:26:28.612 real 0m50.633s
00:26:28.612 user 0m44.795s
00:26:28.612 sys 0m12.208s
00:26:28.612 18:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:28.612 18:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.612 ************************************
00:26:28.612 END TEST nvmf_auth_host
00:26:28.612 ************************************
00:26:28.612 18:41:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:28.612 18:41:13 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]]
00:26:28.612 18:41:13 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:26:28.612 18:41:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:28.612 18:41:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:28.612 18:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:28.612 ************************************
00:26:28.612 START TEST nvmf_digest
00:26:28.612 ************************************
00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:26:28.612 * Looking for test storage...
00:26:28.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.612 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:28.613 18:41:14 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.613 18:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:35.183 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:35.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:35.183 Found net devices under 0000:86:00.0: cvl_0_0 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:35.183 Found net devices under 0000:86:00.1: cvl_0_1 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:35.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:26:35.183 00:26:35.183 --- 10.0.0.2 ping statistics --- 00:26:35.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.183 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:26:35.183 00:26:35.183 --- 10.0.0.1 ping statistics --- 00:26:35.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.183 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:35.183 18:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.184 ************************************ 00:26:35.184 START TEST nvmf_digest_clean 00:26:35.184 ************************************ 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4060096 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4060096 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4060096 ']' 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.184 
18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.184 18:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.184 [2024-07-15 18:41:19.975688] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:35.184 [2024-07-15 18:41:19.975740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.184 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.184 [2024-07-15 18:41:20.047700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.184 [2024-07-15 18:41:20.133383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.184 [2024-07-15 18:41:20.133421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.184 [2024-07-15 18:41:20.133428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.184 [2024-07-15 18:41:20.133434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.184 [2024-07-15 18:41:20.133439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
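For reference, the namespace topology the harness assembled above (interface names cvl_0_0/cvl_0_1 are specific to this run's E810 ports) condenses to the following sketch:

    # Target port is isolated in its own network namespace; the initiator
    # port stays in the root namespace, giving a two-port loopback over
    # the physical NICs.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic (port 4420) from the initiator side, then verify.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2

The nvmf_tgt launched above therefore runs under "ip netns exec cvl_0_0_ns_spdk" and listens on 10.0.0.2:4420, while bdevperf connects from the root namespace.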
00:26:35.184 [2024-07-15 18:41:20.133461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.443 null0 00:26:35.443 [2024-07-15 18:41:20.906812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.443 [2024-07-15 18:41:20.930992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4060245 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4060245 /var/tmp/bperf.sock 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4060245 ']' 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:35.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.443 18:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.443 [2024-07-15 18:41:20.981551] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:35.443 [2024-07-15 18:41:20.981590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060245 ] 00:26:35.701 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.701 [2024-07-15 18:41:21.046282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.701 [2024-07-15 18:41:21.124220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.266 18:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.266 18:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:36.266 18:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:36.266 18:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:36.266 18:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:36.524 18:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.524 18:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.782 nvme0n1 00:26:36.782 18:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:36.782 18:41:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.040 Running I/O for 2 seconds... 
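Each clean-digest pass drives bdevperf through the same three RPC steps over /var/tmp/bperf.sock, as the xtrace above shows (long workspace paths shortened here for readability):

    # Finish bdevperf init, attach the target with TCP data digest enabled
    # (--ddgst adds CRC32C generation/verification on data PDUs), then run
    # the timed workload.
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests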
00:26:38.939
00:26:38.939 Latency(us)
00:26:38.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.939 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:38.939 nvme0n1 : 2.04 25796.99 100.77 0.00 0.00 4859.20 2200.14 45188.63
00:26:38.939 ===================================================================================================================
00:26:38.939 Total : 25796.99 100.77 0.00 0.00 4859.20 2200.14 45188.63
00:26:38.939 0
00:26:38.939 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:38.939 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:38.939 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:38.939 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:38.939 | select(.opcode=="crc32c")
00:26:38.939 | "\(.module_name) \(.executed)"'
00:26:38.939 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4060245
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4060245 ']'
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4060245
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060245
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060245'
00:26:39.198 killing process with pid 4060245
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4060245
00:26:39.198 Received shutdown signal, test time was about 2.000000 seconds
00:26:39.198
00:26:39.198 Latency(us)
00:26:39.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:39.198 ===================================================================================================================
00:26:39.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:39.198 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4060245
00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:26:39.457 18:41:24
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4060944 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4060944 /var/tmp/bperf.sock 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4060944 ']' 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.457 18:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.457 [2024-07-15 18:41:24.902659] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:39.458 [2024-07-15 18:41:24.902705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060944 ] 00:26:39.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.458 Zero copy mechanism will not be used. 
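As a quick sanity check on the result tables: the MiB/s column is just IOPS times the IO size. For the 4 KiB randread run above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 25796.99 * 4096 / 1048576 }'    # -> 100.77 MiB/s

The same relation holds for the 128 KiB runs that follow (e.g. 5622.06 * 131072 / 1048576 = 702.76).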
00:26:39.458 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.458 [2024-07-15 18:41:24.970986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.716 [2024-07-15 18:41:25.038907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.282 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.282 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:40.282 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.282 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.282 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.542 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.542 18:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.800 nvme0n1 00:26:40.800 18:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:40.800 18:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.800 Zero copy mechanism will not be used. 00:26:40.800 Running I/O for 2 seconds... 
00:26:43.331
00:26:43.331 Latency(us)
00:26:43.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.332 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:43.332 nvme0n1 : 2.00 5622.06 702.76 0.00 0.00 2843.12 616.35 7521.04
00:26:43.332 ===================================================================================================================
00:26:43.332 Total : 5622.06 702.76 0.00 0.00 2843.12 616.35 7521.04
00:26:43.332 0
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:43.332 | select(.opcode=="crc32c")
00:26:43.332 | "\(.module_name) \(.executed)"'
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4060944
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4060944 ']'
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4060944
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060944
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060944'
00:26:43.332 killing process with pid 4060944
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4060944
00:26:43.332 Received shutdown signal, test time was about 2.000000 seconds
00:26:43.332
00:26:43.332 Latency(us)
00:26:43.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.332 ===================================================================================================================
00:26:43.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4060944
00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:26:43.332 18:41:28
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4061627 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4061627 /var/tmp/bperf.sock 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4061627 ']' 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.332 18:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:43.332 [2024-07-15 18:41:28.772605] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:26:43.332 [2024-07-15 18:41:28.772655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061627 ] 00:26:43.332 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.332 [2024-07-15 18:41:28.840617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.590 [2024-07-15 18:41:28.919369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.219 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.219 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:44.219 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:44.219 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:44.219 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.478 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.478 18:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.736 nvme0n1 00:26:44.736 18:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.736 18:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.994 Running I/O for 2 seconds... 
00:26:46.899
00:26:46.899 Latency(us)
00:26:46.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.899 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:46.899 nvme0n1 : 2.00 28053.53 109.58 0.00 0.00 4554.68 4306.65 12170.97
00:26:46.899 ===================================================================================================================
00:26:46.899 Total : 28053.53 109.58 0.00 0.00 4554.68 4306.65 12170.97
00:26:46.899 0
00:26:46.899 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:46.899 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:46.899 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:46.899 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:46.899 | select(.opcode=="crc32c")
00:26:46.899 | "\(.module_name) \(.executed)"'
00:26:46.899 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4061627
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4061627 ']'
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4061627
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061627
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061627'
00:26:47.159 killing process with pid 4061627
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4061627
00:26:47.159 Received shutdown signal, test time was about 2.000000 seconds
00:26:47.159
00:26:47.159 Latency(us)
00:26:47.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.159 ===================================================================================================================
00:26:47.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:47.159 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4061627
00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:47.418 18:41:32
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4062175 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4062175 /var/tmp/bperf.sock 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 4062175 ']' 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.418 18:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.418 [2024-07-15 18:41:32.786227] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:47.418 [2024-07-15 18:41:32.786273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062175 ] 00:26:47.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.418 Zero copy mechanism will not be used. 
00:26:47.418 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.418 [2024-07-15 18:41:32.853028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.418 [2024-07-15 18:41:32.922700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.352 18:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.919 nvme0n1 00:26:48.919 18:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.919 18:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.919 Zero copy mechanism will not be used. 00:26:48.919 Running I/O for 2 seconds... 
00:26:50.838
00:26:50.838 Latency(us)
00:26:50.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:50.838 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:50.838 nvme0n1 : 2.00 7068.93 883.62 0.00 0.00 2259.91 1646.20 10298.51
00:26:50.838 ===================================================================================================================
00:26:50.838 Total : 7068.93 883.62 0.00 0.00 2259.91 1646.20 10298.51
00:26:50.838 0
00:26:50.838 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:50.838 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:50.838 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:50.838 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:50.838 | select(.opcode=="crc32c")
00:26:50.838 | "\(.module_name) \(.executed)"'
00:26:50.838 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4062175
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4062175 ']'
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4062175
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4062175
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4062175'
00:26:51.096 killing process with pid 4062175
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4062175
00:26:51.096 Received shutdown signal, test time was about 2.000000 seconds
00:26:51.096
00:26:51.096 Latency(us)
00:26:51.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:51.096 ===================================================================================================================
00:26:51.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:51.096 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4062175
00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4060096
00:26:51.355 18:41:36
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 4060096 ']' 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 4060096 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4060096 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4060096' 00:26:51.355 killing process with pid 4060096 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 4060096 00:26:51.355 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 4060096 00:26:51.615 00:26:51.615 real 0m17.052s 00:26:51.615 user 0m32.388s 00:26:51.615 sys 0m4.783s 00:26:51.615 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.615 18:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.615 ************************************ 00:26:51.615 END TEST nvmf_digest_clean 00:26:51.615 ************************************ 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.615 ************************************ 00:26:51.615 START TEST nvmf_digest_error 00:26:51.615 ************************************ 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4062891 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4062891 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4062891 ']' 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.615 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.615 [2024-07-15 18:41:37.093348] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:51.615 [2024-07-15 18:41:37.093392] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.615 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.615 [2024-07-15 18:41:37.161980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.874 [2024-07-15 18:41:37.240419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.874 [2024-07-15 18:41:37.240457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.874 [2024-07-15 18:41:37.240464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.874 [2024-07-15 18:41:37.240471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.874 [2024-07-15 18:41:37.240476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
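The error-path test set up below differs from the clean path only in the accel configuration: crc32c on the target is routed to the error-injection module, left disabled while bdevperf attaches, then told to corrupt a batch of operations, which is what produces the "data digest error" notices later in the log. In brief, per the xtrace that follows:

    # Target side (once, at startup): route crc32c to the accel error module.
    rpc_cmd accel_assign_opc -o crc32c -m error
    # bperf side: keep NVMe error statistics and retry indefinitely (-1)
    # instead of failing the bdev on the first digest error.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection stays off while the controller attaches, then 256 operations
    # are corrupted for the measured run.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256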
00:26:51.874 [2024-07-15 18:41:37.240511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:52.442 [2024-07-15 18:41:37.922486] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.442 18:41:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:52.701 null0 00:26:52.701 [2024-07-15 18:41:38.014098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.701 [2024-07-15 18:41:38.038270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4063092 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4063092 /var/tmp/bperf.sock 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4063092 ']' 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:52.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.701 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:52.701 [2024-07-15 18:41:38.090566] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:26:52.702 [2024-07-15 18:41:38.090606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063092 ] 00:26:52.702 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.702 [2024-07-15 18:41:38.158658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.702 [2024-07-15 18:41:38.236530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.638 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.638 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:53.638 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:53.638 18:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.638 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.897 nvme0n1 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:53.897 18:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.897 Running I/O for 2 seconds... 00:26:53.897 [2024-07-15 18:41:39.421977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:53.897 [2024-07-15 18:41:39.422008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.897 [2024-07-15 18:41:39.422018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.897 [2024-07-15 18:41:39.432541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:53.897 [2024-07-15 18:41:39.432562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.897 [2024-07-15 18:41:39.432571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.898 [2024-07-15 18:41:39.443492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:53.898 [2024-07-15 18:41:39.443511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.898 [2024-07-15 18:41:39.443520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.898 [2024-07-15 18:41:39.451900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:53.898 [2024-07-15 18:41:39.451920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.898 [2024-07-15 18:41:39.451929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.463564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.463583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.472592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.472612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.472621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.481633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.481653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8128 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.481660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.490436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.490454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.490462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.499567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.499586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.508405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.508423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.508430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.517651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.517669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.517677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.526503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.526522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.526530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.535485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.535503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.535511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.544730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.544748] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.544756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.554289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.554308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.554316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.563929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.563948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.563956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.571948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.571966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.571977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.580898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.580918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.580926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.593411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.157 [2024-07-15 18:41:39.593430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.157 [2024-07-15 18:41:39.593438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.157 [2024-07-15 18:41:39.601043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.601062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.601070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.612304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 
18:41:39.612322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.612330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.623364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.623383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.623391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.635538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.635556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.635564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.644644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.644663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.644670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.655551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.655570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.655578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.663666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.663685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.663692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.675791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.675811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.675819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.687382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.687401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.687408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.698309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.698327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.698335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.158 [2024-07-15 18:41:39.706980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.158 [2024-07-15 18:41:39.706997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.158 [2024-07-15 18:41:39.707004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.718649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.718668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.718677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.728272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.728290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.736059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.736077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.736085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.748131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.748151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.748162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.755668] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.755687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.755694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.767201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.767220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.767228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.775599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.775618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.775626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.786548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.786566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.786574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.797190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.797209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.797217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.804617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.418 [2024-07-15 18:41:39.804636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.418 [2024-07-15 18:41:39.804644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.418 [2024-07-15 18:41:39.813968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.813988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.813995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.823836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.823855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.823863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.832373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.832397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.832405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.841393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.841413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.841421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.851146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.851165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.851173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.859388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.859407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.859414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.869951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.869971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.869979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.878783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.878802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.878811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.887945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.887963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.887971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.897784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.897802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.897810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.906966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.906985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.906994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.915668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.915685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.915693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.923834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.923853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.923861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.936140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.936158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.936166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.948186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.948206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.948215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.956474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.956493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.956501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.419 [2024-07-15 18:41:39.966499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.419 [2024-07-15 18:41:39.966519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.419 [2024-07-15 18:41:39.966526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:39.974869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:39.974889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:39.974897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:39.986321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:39.986345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:39.986354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:39.996698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:39.996717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:39.996728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.005675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.005698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.005706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.018749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.018772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:54.679 [2024-07-15 18:41:40.018781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.030038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.030059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.030067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.038140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.038160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.038168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.048739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.048759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.048768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.058476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.058504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.058528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.070743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.070766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.070774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.082353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.082373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.082382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.091596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.091619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.091628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.100778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.100797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.100805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.110089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.110108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.110116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.118516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.118535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.118543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.128067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.128086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.128094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.138110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.138129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.138137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.146763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.146791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.157841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.157860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.157868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.165779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.165798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.165806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.176603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.176622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.176630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.186052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.186071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.186079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.197245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.197264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.197273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.205860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.205888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.218142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.679 [2024-07-15 18:41:40.218161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.679 [2024-07-15 18:41:40.218169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.679 [2024-07-15 18:41:40.229772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 
00:26:54.680 [2024-07-15 18:41:40.229791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.680 [2024-07-15 18:41:40.229799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.240839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.240858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.240866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.249269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.249287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.249295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.259364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.259387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.271162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.271181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.271189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.279546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.279565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.279574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.290903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.290922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.290930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.298802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.298821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.298829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.310917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.310936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.310944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.322321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.322349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.322358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.331269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.331289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.331298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.343419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.343439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.343447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.354085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.354104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.354112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.364027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.364046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.364054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.373448] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.373466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.373474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.382362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.382380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.382388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.393969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.393988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.393996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.402121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.402140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.402147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.412846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.412865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.412873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.424963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.424982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.424990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.939 [2024-07-15 18:41:40.437541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20) 00:26:54.939 [2024-07-15 18:41:40.437560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.939 [2024-07-15 18:41:40.437571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:26:54.939 [2024-07-15 18:41:40.445905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22c4f20)
00:26:54.939 [2024-07-15 18:41:40.445922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.939 [2024-07-15 18:41:40.445930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x22c4f20), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid/lba from 18:41:40.456933 through 18:41:41.411540; the duplicated entries are omitted here ...]
00:26:55.975
00:26:55.975                                                                                 Latency(us)
00:26:55.975 Device Information                                      : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:55.975 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:55.975 nvme0n1                                                 :       2.00   25312.87      98.88      0.00      0.00    5052.25    2278.16   17226.61
00:26:55.975 ===================================================================================================================
00:26:55.975 Total                                                   :              25312.87      98.88      0.00      0.00    5052.25    2278.16   17226.61
00:26:55.975 0
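The summary above is self-consistent: 25312.87 IOPS of 4096-byte reads is 25312.87 x 4096 / 2^20 ≈ 98.88 MiB/s, matching the MiB/s column for the 2.00 s runtime. The get_transient_errcount helper traced below turns this run into a pass/fail check by reading the per-status-code NVMe error counters kept by the bdev_nvme driver. A minimal standalone sketch of that check, assuming the same rpc.py path and /var/tmp/bperf.sock socket used in this run:

    # Count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR,
    # exposed through bdev_get_iostat when --nvme-error-stat is enabled.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    ((errcount > 0))   # asserts that injected digest errors were observed; 198 in this run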
00:26:55.975  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:55.975  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:55.975  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:55.975 | .driver_specific
00:26:55.975 | .nvme_error
00:26:55.975 | .status_code
00:26:55.975 | .command_transient_transport_error'
00:26:55.975  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4063092
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4063092 ']'
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4063092
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4063092
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4063092'
00:26:56.234 killing process with pid 4063092
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4063092
00:26:56.234 Received shutdown signal, test time was about 2.000000 seconds
00:26:56.234
00:26:56.234                                                                                 Latency(us)
00:26:56.234 Device Information                                      : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:56.234 ===================================================================================================================
00:26:56.234 Total                                                   :       0.00       0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:26:56.234  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4063092
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4063787
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4063787 /var/tmp/bperf.sock
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:56.492  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4063787 ']'
00:26:56.493  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:56.493  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:56.493  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:56.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:56.493  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:56.493  18:41:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
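With the first pass torn down, run_bperf_err starts a fresh bdevperf in idle mode (-z) and waits for its RPC socket before configuring it; the 131072-byte I/O size is what triggers the "greater than zero copy threshold (65536)" notice below. A sketch of an equivalent manual launch, using the binary and socket paths from this workspace (waitforlisten's internals are not traced here, so the polling loop is a stand-in):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the app answers RPCs on the socket (roughly what waitforlisten does).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do sleep 0.1; done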
00:26:56.493 [2024-07-15 18:41:41.885545] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:26:56.493 [2024-07-15 18:41:41.885595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4063787 ]
00:26:56.493 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:56.493 Zero copy mechanism will not be used.
00:26:56.493 EAL: No free 2048 kB hugepages reported on node 1
00:26:56.493 [2024-07-15 18:41:41.952716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:56.493 [2024-07-15 18:41:42.031225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:57.427  18:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:57.685 nvme0n1
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
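The four RPCs just traced set up the error scenario for this pass: per-status-code NVMe error counters with unlimited retries (--bdev-retry-count -1), any stale crc32c injection cleared, the controller attached with TCP data digest enabled (--ddgst), and the accel error injector re-armed to corrupt crc32c results so the digest check on the receive path fails. Condensed into plain rpc.py calls in traced order; rpc_cmd is not expanded in this trace, so pointing it at the default RPC socket is an inference, as is reading -i 32 as an injection interval:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host (bdevperf) side, over bperf.sock: count errors per status code, retry forever.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection side (default RPC socket assumed): clear any stale injection before attaching.
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm: corrupt crc32c results (every 32nd operation, if -i is an interval).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

The perform_tests call traced next then drives the 2-second run whose digest errors follow.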
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:57.685  18:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:57.943 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:57.943 Zero copy mechanism will not be used.
00:26:57.943 Running I/O for 2 seconds...
00:26:57.943 [2024-07-15 18:41:43.317182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0)
00:26:57.943 [2024-07-15 18:41:43.317216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.943 [2024-07-15 18:41:43.317227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats on tqpair=(0x17080b0) for the 128 KiB reads (len:32 blocks, cid:15, varying lba and sqhd) from 18:41:43.322810 through 18:41:43.450658; the duplicated entries are omitted here ...]
[2024-07-15 18:41:43.456029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0)
[2024-07-15 18:41:43.456047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.944 [2024-07-15 18:41:43.456054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.461456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.461475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.461482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.466814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.466833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.466841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.472134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.472153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.472160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.477363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.477382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.477390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.482785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.482804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.482812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.488213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.488232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.488240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.493714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.493733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.493741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.944 [2024-07-15 18:41:43.499307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:57.944 [2024-07-15 18:41:43.499326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.944 [2024-07-15 18:41:43.499344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.504762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.504781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.504789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.510192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.510212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.510220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.515742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.515761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.515769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.521004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.521023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.521030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.526549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.526569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.526576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.532142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.532161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.203 [2024-07-15 18:41:43.532169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.203 [2024-07-15 18:41:43.537629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.203 [2024-07-15 18:41:43.537649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.537657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.543167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.543187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.543194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.548657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.548676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.548684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.554331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.554355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.554363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.559855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.559875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.559883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.565399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.565418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.565425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.570676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 
00:26:58.204 [2024-07-15 18:41:43.570696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.570703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.576063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.576084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.576092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.581279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.581298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.581306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.586480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.586499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.586507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.591801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.591822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.591833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.597389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.597410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.597418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.603046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.603065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.603073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.608506] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.608527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.608534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.614115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.614135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.614143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.619756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.619776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.619784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.625132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.625161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.630552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.630572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.630580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.635894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.635914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.635922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.641346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.641369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.641377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.646559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.646579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.646588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.651656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.651676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.651683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.656817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.656837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.656844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.662027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.662046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.662054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.667229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.667248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.667256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.672445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.672465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.672473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.677755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.677774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.677782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.682824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.682844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.682852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.688014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.688033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.204 [2024-07-15 18:41:43.688040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.204 [2024-07-15 18:41:43.693306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.204 [2024-07-15 18:41:43.693324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.693332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.698585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.698605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.698612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.703806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.703825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.703832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.709069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.709088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.709096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.714363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.714382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.714390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.719643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.719662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.719670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.724871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.724891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.724898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.730126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.730145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.730155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.735463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.735481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.735489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.740697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.740716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.740724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.745974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.745993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.746000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.751229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.751248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:58.205 [2024-07-15 18:41:43.751255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.205 [2024-07-15 18:41:43.756589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.205 [2024-07-15 18:41:43.756609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.205 [2024-07-15 18:41:43.756617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.762014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.762033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.762041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.767272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.767291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.767298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.772517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.772536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.772543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.777793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.777812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.777820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.783035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.783054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.783062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.788258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.788276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.788286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.793471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.793490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.793497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.464 [2024-07-15 18:41:43.798709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.464 [2024-07-15 18:41:43.798727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.464 [2024-07-15 18:41:43.798735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.803969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.803988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.803995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.809210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.809229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.809237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.814455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.814474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.814481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.819816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.819835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.819848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.825110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.825129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.825136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.830221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.830241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.830249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.835293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.835312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.835319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.840413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.840433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.840440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.845531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.845550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.845558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.850281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.850302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.850310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.855452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.855473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.855481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.860550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 
00:26:58.465 [2024-07-15 18:41:43.860571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.860578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.865621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.865645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.865652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.870694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.870714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.870721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.875812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.875833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.875841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.880905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.880926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.880933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.886037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.886058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.886065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.891088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.891108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.891116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.896147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.896168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.896176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.901224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.901244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.901252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.906236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.906256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.906264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.911373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.911394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.911401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.916489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.916509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.916517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.921577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.921597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.921605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.926962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.926983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.926991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.932583] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.932604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.932611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.937849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.937870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.937877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.943096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.943117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.943125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.465 [2024-07-15 18:41:43.948428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.465 [2024-07-15 18:41:43.948448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.465 [2024-07-15 18:41:43.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.953680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.953701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.953711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.958939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.958960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.958967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.964231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.964252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.964259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:58.466 [2024-07-15 18:41:43.969462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.969482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.974747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.974768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.974776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.980032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.980053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.980060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.985388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.985409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.985416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.990636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.990656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.990664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:43.995581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:43.995601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:43.995609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:44.000760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:44.000781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:44.000789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:44.005748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:44.005769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:44.005776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:44.010840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:44.010861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:44.010869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.466 [2024-07-15 18:41:44.015899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.466 [2024-07-15 18:41:44.015920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.466 [2024-07-15 18:41:44.015927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.725 [2024-07-15 18:41:44.020975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.725 [2024-07-15 18:41:44.020996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.725 [2024-07-15 18:41:44.021004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.725 [2024-07-15 18:41:44.026064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.725 [2024-07-15 18:41:44.026085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.725 [2024-07-15 18:41:44.026094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.725 [2024-07-15 18:41:44.031326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.725 [2024-07-15 18:41:44.031351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.725 [2024-07-15 18:41:44.031359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.725 [2024-07-15 18:41:44.036592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.725 [2024-07-15 18:41:44.036613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.725 [2024-07-15 18:41:44.036621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.725 [2024-07-15 18:41:44.041943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.725 [2024-07-15 18:41:44.041963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.725 [2024-07-15 18:41:44.041975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.047267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.047288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.047296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.052594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.052615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.052622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.057812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.057833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.057840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.063156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.063176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.063184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.068434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.068454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.068462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.073749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.073770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 
[2024-07-15 18:41:44.073778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.079034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.079055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.079062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.084309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.084330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.084346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.089627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.089652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.089660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.094992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.095012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.095020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.100524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.100545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.100552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.106167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.106187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.106195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.111420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.111440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.111448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.116483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.116504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.116513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.121510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.121531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.121539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.126515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.126536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.126544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.131672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.131692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.131700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.136770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.136791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.136799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.141773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.141794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.141804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.146842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.146863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.146870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.151956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.151978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.151986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.156998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.157021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.157029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.162105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.162127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.162135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.167211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.167232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.167240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.172391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.172412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.172420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.177545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.177565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.177577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.726 [2024-07-15 18:41:44.182788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.726 [2024-07-15 18:41:44.182810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.726 [2024-07-15 18:41:44.182817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.188115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.188137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.188144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.193408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.193429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.193437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.198816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.198837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.198846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.204306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.204327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.204341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.209831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.209854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.209862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.215363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.215385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.220803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 
[2024-07-15 18:41:44.220825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.220833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.226116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.226141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.226149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.231491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.231513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.231521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.236857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.236878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.236885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.242191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.242211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.242219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.247469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.247490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.247497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.252762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.252783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.252790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.258046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.258067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.258075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.263318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.263343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.263352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.268567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.268588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.268596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.273839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.273860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.273867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.727 [2024-07-15 18:41:44.279150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.727 [2024-07-15 18:41:44.279172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.727 [2024-07-15 18:41:44.279180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.987 [2024-07-15 18:41:44.284424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.284445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.284453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.289708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.289728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.289736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.294952] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.294972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.294979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.300261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.300282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.300290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.305527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.305547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.305555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.310828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.310848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.310855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.316136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.316156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.316167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.321383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.321403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.321411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.326710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.326731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.326739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:58.988 [2024-07-15 18:41:44.332082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.332103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.332110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.337462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.337483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.337491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.342811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.342840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.348145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.348166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.348175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.353478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.353499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.353507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.358763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.358785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.358793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.364059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.364080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.364088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.369376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.369397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.369405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.374687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.374708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.374715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.379949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.379970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.379977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.385258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.385279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.385287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.390536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.390557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.390565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.395826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.395846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.395854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.401139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.401160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.401168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.406404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.406436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.411732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.411761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.417008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.417030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.417038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.422311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.422332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.422346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.427593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.988 [2024-07-15 18:41:44.427615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.988 [2024-07-15 18:41:44.427623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.988 [2024-07-15 18:41:44.432903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.432925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.438249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.438270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.438278] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.443788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.443809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.449119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.449140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.449147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.454396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.454420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.454428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.459637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.459656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.459664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.464975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.464996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.465004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.470355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.470376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.470384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.475706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.475727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:58.989 [2024-07-15 18:41:44.475734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.481020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.481040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.481048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.486352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.486372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.486380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.491661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.491681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.491689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.497016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.497037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.497044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.502304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.502325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.502332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.507593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.507614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.507622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.512851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.512872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.518133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.518154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.518161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.523418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.523439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.523447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.528741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.528763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.528770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.534026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.534046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.534053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:58.989 [2024-07-15 18:41:44.539357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:58.989 [2024-07-15 18:41:44.539378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.989 [2024-07-15 18:41:44.539385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.544687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.544708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.544719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.550061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.550082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.550090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.555392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.555412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.555419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.560655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.560675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.560683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.565927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.565947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.565954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.571181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.571201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.571209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.576439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.576460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.576468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.581792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 [2024-07-15 18:41:44.581812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.249 [2024-07-15 18:41:44.581820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.249 [2024-07-15 18:41:44.587047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0) 00:26:59.249 
[2024-07-15 18:41:44.587067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.249 [2024-07-15 18:41:44.587075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:59.249 [2024-07-15 18:41:44.592402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0)
00:26:59.249 [2024-07-15 18:41:44.592425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.249 [2024-07-15 18:41:44.592433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.249 [... 18:41:44.597729 through 18:41:45.300621: the same three-line pattern repeats for every in-flight READ (data digest error on tqpair=(0x17080b0), the READ print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), with varying cid, lba and sqhd; the run accumulates 373 such transient-error completions in total, as checked below ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.773 [2024-07-15 18:41:45.305979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.773 [2024-07-15 18:41:45.311332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17080b0)
00:26:59.773 [2024-07-15 18:41:45.311358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.773 [2024-07-15 18:41:45.311366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:59.773
00:26:59.773 Latency(us)
00:26:59.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:59.773 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:59.773 nvme0n1 : 2.00 5791.61 723.95 0.00 0.00 2760.03 670.96 7333.79
00:26:59.773 ===================================================================================================================
00:26:59.773 Total : 5791.61 723.95 0.00 0.00 2760.03 670.96 7333.79
00:26:59.773 0
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:00.032 | .driver_specific
00:27:00.032 | .nvme_error
00:27:00.032 | .status_code
00:27:00.032 | .command_transient_transport_error'
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 373 > 0 ))
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4063787
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4063787 ']'
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4063787
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4063787
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4063787'
killing process with pid 4063787
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4063787
Received shutdown signal, test time was about 2.000000 seconds
00:27:00.032
00:27:00.032 Latency(us)
00:27:00.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.032 ===================================================================================================================
00:27:00.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:00.032 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4063787
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4064367
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4064367 /var/tmp/bperf.sock
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4064367 ']'
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:00.291 18:41:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:00.291 [2024-07-15 18:41:45.800204] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:27:00.291 [2024-07-15 18:41:45.800252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064367 ]
00:27:00.291 EAL: No free 2048 kB hugepages reported on node 1
00:27:00.550 [2024-07-15 18:41:45.866318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.550 [2024-07-15 18:41:45.934661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:01.117 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:01.117 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:01.117 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.117 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.376 18:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.635 nvme0n1
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:01.635 18:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:01.635 Running I/O for 2 seconds...
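[Annotation] The trace above is the setup for the randwrite digest-error case: bdevperf (reached over /var/tmp/bperf.sock) is told to keep per-status-code NVMe error counters and to retry failed I/O indefinitely, the controller is re-attached with data digest (--ddgst) enabled, and the accel error injector is armed to corrupt crc32c results, so affected PDUs fail the digest check and complete back as COMMAND TRANSIENT TRANSPORT ERROR, which is what the stream below shows. A condensed sketch of the same sequence, with the commands and flags copied from the trace; the relative paths are shortened from this job's workspace, and rpc_cmd carrying no -s flag is assumed here to mean the second accel_error_inject_error calls go to the default RPC socket (the nvmf target side) rather than to bdevperf:

    # bdevperf side: count NVMe errors per status code, retry failed I/O forever
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any stale crc32c injection before attaching
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # attach with data digest enabled so every data PDU carries a crc32c to verify
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm the injector to corrupt crc32c results (-i 256 as captured in the trace)
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the 2-second randwrite workload configured on bdevperf's command line
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests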
00:27:01.635 [2024-07-15 18:41:47.150133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.635 [2024-07-15 18:41:47.150315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.635 [2024-07-15 18:41:47.150350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.635 [2024-07-15 18:41:47.159542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.635 [2024-07-15 18:41:47.159698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.635 [2024-07-15 18:41:47.159719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.635 [2024-07-15 18:41:47.168837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.635 [2024-07-15 18:41:47.169003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.635 [2024-07-15 18:41:47.169022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.635 [2024-07-15 18:41:47.178083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.635 [2024-07-15 18:41:47.178227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.635 [2024-07-15 18:41:47.178245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.635 [2024-07-15 18:41:47.187293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.635 [2024-07-15 18:41:47.187458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.635 [2024-07-15 18:41:47.187476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.196764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.196910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.196927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.206145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.206291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.206311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:01.894 [2024-07-15 18:41:47.215484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.215656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.215677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.224726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.224886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.233945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.234126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.243207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.243351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.243369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.252528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.252734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.252753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.261959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.894 [2024-07-15 18:41:47.262105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.894 [2024-07-15 18:41:47.262122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.894 [2024-07-15 18:41:47.271132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.271275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.271292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:01.895 [2024-07-15 18:41:47.280308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.280460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.280478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.289535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.289698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.289715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.298783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.298931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.298947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.307974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.308117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.308134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.317155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.317297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.317314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.326389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.326533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.326550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.335620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.335762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.335779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:01.895 [2024-07-15 18:41:47.344858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.345018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.345036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.354096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.354239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.354256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.363276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.363427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.363443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.372506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.372649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.372665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.381737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.381897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.381915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.390935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.391079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.391096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.400125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.400272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.400288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:01.895 [2024-07-15 18:41:47.409576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.409723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.409740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.418982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.419144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.419161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.428343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.428506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.428523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.437611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.437773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.437790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:01.895 [2024-07-15 18:41:47.446856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:01.895 [2024-07-15 18:41:47.447001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.895 [2024-07-15 18:41:47.447017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.456279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.456432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.456449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.465545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.465708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.465726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.155 [2024-07-15 18:41:47.474781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.474924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.474957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.484021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.484180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.484197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.493215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.493384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.493401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.502434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.502581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.502597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.511592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.511734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.511751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.520802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.520964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.520980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.530014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.530156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.530172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.155 [2024-07-15 18:41:47.539181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.539326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.539350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.548385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.548548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.548565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.557618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.557759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.557775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.566780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.566923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.566940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.575932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.576075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.576091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.585141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.585305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.585323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.594325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.594475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.594492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.155 [2024-07-15 18:41:47.603580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.603725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.603741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.612763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.612925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.612942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.622021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.622170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.622186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.631185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.631327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.631348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.640402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.640582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.649607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.649750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.649766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.658782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.658942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.658958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.155 [2024-07-15 18:41:47.668200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.668362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.668379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.155 [2024-07-15 18:41:47.677479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.155 [2024-07-15 18:41:47.677641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.155 [2024-07-15 18:41:47.677658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.156 [2024-07-15 18:41:47.686680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.156 [2024-07-15 18:41:47.686822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.156 [2024-07-15 18:41:47.686839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.156 [2024-07-15 18:41:47.695853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.156 [2024-07-15 18:41:47.695996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.156 [2024-07-15 18:41:47.696012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.156 [2024-07-15 18:41:47.705014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.156 [2024-07-15 18:41:47.705156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.156 [2024-07-15 18:41:47.705172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.714521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.714667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.714684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.723791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.723935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.723951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.415 [2024-07-15 18:41:47.733127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.733270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.733287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.742369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.742530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.742547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.751554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.751698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.751714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.760930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.761073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.761091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.770140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.770301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.770318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.779326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.415 [2024-07-15 18:41:47.779477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.415 [2024-07-15 18:41:47.779494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.415 [2024-07-15 18:41:47.788502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.788646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.788663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.416 [2024-07-15 18:41:47.797696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.797858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.797876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.806911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.807054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.807070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.816071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.816213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.816230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.825260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.825426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.825443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.834497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.834667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.843683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.843824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.843841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.852835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.852978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.852994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.416 [2024-07-15 18:41:47.862051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.862210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.862230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.871237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.871388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.871404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.880411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.880556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.880572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.889594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.889754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.889771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.898823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.898967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.898983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.907970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.908114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.908131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:02.416 [2024-07-15 18:41:47.917153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78 00:27:02.416 [2024-07-15 18:41:47.917310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.416 [2024-07-15 18:41:47.917327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:27:02.416 [2024-07-15 18:41:47.926529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78
00:27:02.416 [2024-07-15 18:41:47.926674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.416 [2024-07-15 18:41:47.926691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... the same three-record group (tcp.c:2067 data_crc32_calc_done data digest error, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at roughly 9 ms intervals from 18:41:47.935 through 18:41:49.128, the entries differing only in timestamp, cid (cycling 0/1/2), and lba; the counter check below reports 216 such transient transport errors for this run ...]
00:27:03.720 [2024-07-15 18:41:49.137867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd114d0) with pdu=0x2000190fda78
00:27:03.720 [2024-07-15 18:41:49.138008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.720 [2024-07-15 18:41:49.138024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:03.720
00:27:03.720 Latency(us)
00:27:03.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:03.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:03.720 nvme0n1 : 2.00 27509.32 107.46 0.00 0.00 4645.59 4400.27 14917.24
00:27:03.720 ===================================================================================================================
00:27:03.720 Total : 27509.32 107.46 0.00 0.00 4645.59 4400.27 14917.24
00:27:03.720 0
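A quick consistency check on the summary above: at this job's 4096-byte I/O size, 27509.32 IOPS x 4096 B ≈ 112.68 MB/s, which is 107.46 MiB/s after dividing by 1,048,576 — exactly the MiB/s column. Little's law likewise ties queue depth to latency: 128 in-flight I/Os / 27509.32 IOPS ≈ 4653 us, in line with the reported 4645.59 us average.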
00:27:03.720 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:03.720 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:03.720 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:03.720 | .driver_specific
00:27:03.720 | .nvme_error
00:27:03.720 | .status_code
00:27:03.720 | .command_transient_transport_error'
00:27:03.720 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4064367
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4064367 ']'
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4064367
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4064367
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4064367'
killing process with pid 4064367
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4064367
Received shutdown signal, test time was about 2.000000 seconds
00:27:03.979
00:27:03.979 Latency(us)
00:27:03.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:03.979 ===================================================================================================================
00:27:03.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:03.979 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4064367
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4064968
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4064968 /var/tmp/bperf.sock
00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:04.239 18:41:49
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 4064968 ']' 00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.239 18:41:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.239 [2024-07-15 18:41:49.616626] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:04.239 [2024-07-15 18:41:49.616671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064968 ] 00:27:04.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.239 Zero copy mechanism will not be used. 00:27:04.239 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.239 [2024-07-15 18:41:49.680011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.239 [2024-07-15 18:41:49.746981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.194 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.490 nvme0n1 00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:05.490 18:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.490 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.490 Zero copy mechanism will not be used. 00:27:05.490 Running I/O for 2 seconds... 00:27:05.490 [2024-07-15 18:41:51.025841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.490 [2024-07-15 18:41:51.026221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.491 [2024-07-15 18:41:51.026251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.491 [2024-07-15 18:41:51.030305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.491 [2024-07-15 18:41:51.030667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.491 [2024-07-15 18:41:51.030691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.491 [2024-07-15 18:41:51.034707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.491 [2024-07-15 18:41:51.035101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.491 [2024-07-15 18:41:51.035122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.491 [2024-07-15 18:41:51.039204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.491 [2024-07-15 18:41:51.039607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.491 [2024-07-15 18:41:51.039631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.043828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.044221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.044243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.048334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.048744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.052805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.053168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.053189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.057218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.057573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.057593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.061682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.062049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.062070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.066588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.066961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.066981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.071437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.071786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.075954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.076317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.076343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.080938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.081296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.081315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.085986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.086350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.086369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.091244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.091470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.091489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.096187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.096543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.096563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.101430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.101757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.101775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.106495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.106828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.106846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.112051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.112404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.112424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.117208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.117541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.117562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.122321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.122669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.127614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.765 [2024-07-15 18:41:51.127960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.765 [2024-07-15 18:41:51.127979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.765 [2024-07-15 18:41:51.132859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.133189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.133207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.137784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.138125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.138143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.143142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.143499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.143518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.148298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.148642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.148661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.153382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 
[2024-07-15 18:41:51.153721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.158659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.158990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.159009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.163841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.164190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.164209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.169037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.169375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.169393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.174104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.174424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.174443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.179640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.179969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.179987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.184668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.184991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.185010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.189802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) 
with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.190127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.190150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.195180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.195499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.195520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.200319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.200644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.200664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.205504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.205826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.205846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.210654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.210987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.211007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.215349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.215686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.215706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.220058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.220390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.220409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.224571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.224883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.224903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.228811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.229142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.233474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.233786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.233806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.237914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.238232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.238250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.242392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.242702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.242721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.246856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.247171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.247193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.251819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.252140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.252159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.256607] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.256932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.256951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.261387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.261751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.261774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.266147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.266466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.266486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.271043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.766 [2024-07-15 18:41:51.271359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.766 [2024-07-15 18:41:51.271377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.766 [2024-07-15 18:41:51.275938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.276262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.276280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.281331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.281641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.281660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.286113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.286435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.286454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
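The same digest-error/completion pattern repeats for the remainder of this two-second run. For offline triage of a run like this one, the two message families can be tallied straight from a saved console log; a small sketch (the build.log filename is hypothetical), where in a clean run the two counts should track each other:

# Count TCP-layer digest errors and host-side transient-transport completions.
grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log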
00:27:05.767 [2024-07-15 18:41:51.290387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.290703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.290722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.294808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.295153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.299277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.299603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.299622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.303668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.303996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.304015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.308106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.308432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.308451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.312554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.312861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.312879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.767 [2024-07-15 18:41:51.316684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:05.767 [2024-07-15 18:41:51.316968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.767 [2024-07-15 18:41:51.316990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.320598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.320861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.320883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.324285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.324552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.324577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.327911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.328158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.328179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.331554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.331827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.331847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.335201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.335445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.335464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.338864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.339134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.339152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.342453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.342714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.342733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.346365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.346633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.346652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.351261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.351605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.028 [2024-07-15 18:41:51.351623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.028 [2024-07-15 18:41:51.356491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.028 [2024-07-15 18:41:51.356826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.356844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.361758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.362031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.362049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.367044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.367365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.367384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.372145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.372430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.372448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.377571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.377935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.377954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.382948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.383231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.383249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.387074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.387347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.387366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.390778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.391039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.391058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.394425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.394689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.394708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.398062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.398309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.398327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.401727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.401983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.402001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.405373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.405627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 
[2024-07-15 18:41:51.405644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.408946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.409203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.409221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.412583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.412820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.412838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.416618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.416846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.416865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.421156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.421390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.421408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.425607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.425847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.425864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.429449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.429676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.029 [2024-07-15 18:41:51.429694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.029 [2024-07-15 18:41:51.433439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.029 [2024-07-15 18:41:51.433676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:27:06.029 [2024-07-15 18:41:51.433699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:06.029 [2024-07-15 18:41:51.437302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90
00:27:06.029 [2024-07-15 18:41:51.437529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.029 [2024-07-15 18:41:51.437547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern (data_crc32_calc_done *ERROR*, WRITE command *NOTICE*, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats on qid:1 cid:15 with varying LBAs (len:32) from 18:41:51.441568 through 18:41:51.581824, sqhd cycling 0001/0021/0041/0061; from 18:41:51.585890 the identical pattern continues on qid:1 cid:0 through 18:41:51.622074 ...]
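Each data_crc32_calc_done error above is an NVMe/TCP data digest (DDGST) failure: when digests are negotiated on the connection, a CRC32C is computed over each data PDU's DATA field, and a mismatch on receive is reported as a transport-level digest error, which the controller completes with a retryable Transient Transport Error rather than a media error. Below is a minimal, self-contained sketch of that check; the helper names and the seed/final-XOR convention are illustrative assumptions, not taken from SPDK's sources (SPDK itself uses the table/instruction-based helpers in spdk/crc32.h).

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * Slow but dependency-free; stands in for SPDK's optimized helpers. */
static uint32_t crc32c_update(const void *buf, size_t len, uint32_t crc)
{
    const uint8_t *p = buf;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc;
}

/* DDGST over a PDU's DATA field: seed with all ones, XOR the result with
 * all ones (the usual CRC32C convention; assumed, not read from this log). */
static uint32_t nvme_tcp_data_digest(const void *data, size_t len)
{
    return crc32c_update(data, len, 0xFFFFFFFFu) ^ 0xFFFFFFFFu;
}

/* Receive-side check corresponding to data_crc32_calc_done(): recompute
 * the digest over the received DATA and compare it with the DDGST field
 * carried at the tail of the PDU. */
int main(void)
{
    uint8_t data[32] = { 0xAA };      /* stand-in payload */
    uint32_t wire_ddgst = nvme_tcp_data_digest(data, sizeof(data));

    data[7] ^= 0x01;                  /* simulate corruption in flight */
    if (nvme_tcp_data_digest(data, sizeof(data)) != wire_ddgst)
        printf("Data digest error\n");    /* what the log reports */
    return 0;
}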
00:27:06.291 [2024-07-15 18:41:51.627267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90
00:27:06.291 [2024-07-15 18:41:51.627418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.291 [2024-07-15 18:41:51.627436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern repeats on qid:1 cid:0 with varying LBAs (len:32) from 18:41:51.632629 through 18:41:52.067938, sqhd cycling 0001/0021/0041/0061 ...]
00:27:06.554 [2024-07-15 18:41:52.073287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90
00:27:06.554 [2024-07-15 18:41:52.073458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.073476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.079724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.079856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.085824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.085922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.091281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.091355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.091373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.095236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.095327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.095351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.099129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.099210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.099228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.102855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.102943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.102960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.554 [2024-07-15 18:41:52.106840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) 
with pdu=0x2000190fef90 00:27:06.554 [2024-07-15 18:41:52.106922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.554 [2024-07-15 18:41:52.106941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.110738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.110864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.110883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.114700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.114790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.114809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.118840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.118956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.118974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.122831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.122907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.122925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.126889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.126965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.126984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.130964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.131042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.131061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.135064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.135143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.135161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.138993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.139074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.139093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.814 [2024-07-15 18:41:52.142955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.814 [2024-07-15 18:41:52.143017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.814 [2024-07-15 18:41:52.143036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.146654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.146780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.146798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.150376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.150486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.150504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.154067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.154163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.154184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.157812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.157902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.157919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.161547] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.161641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.161658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.165211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.168861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.168977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.168993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.172756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.172864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.172882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.176446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.176564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.176581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.180121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.180221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.180238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.183816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.183898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.183916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.187477] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.187589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.187606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.191145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.191241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.191258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.194771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.194849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.194866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.198414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.198509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.198526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.202065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.202178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.202196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.205719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.205829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.205846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.209456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.209573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.209590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 
[2024-07-15 18:41:52.213100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.213223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.213240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.216744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.216842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.216859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.220382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.220483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.220500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.224023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.224103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.224120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.227634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.227712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.227728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.231431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.231540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.231557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.235366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.235419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.235436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:06.815 [2024-07-15 18:41:52.239930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.240041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.240058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.245154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.245252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.245270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.249062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.249170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.815 [2024-07-15 18:41:52.249187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.815 [2024-07-15 18:41:52.253064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.815 [2024-07-15 18:41:52.253191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.256935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.257023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.257040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.260869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.260942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.260959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.264727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.264817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.264834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.268531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.268629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.268646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.272587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.272671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.272688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.276997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.277086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.277103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.281082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.281193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.281211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.285091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.285222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.285240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.289130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.289212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.289230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.293042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.293130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.293147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.297034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.297148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.297165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.300959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.301061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.305020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.305119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.305137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.308875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.308958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.308976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.312802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.312922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.312939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.316711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.316790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.316807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.320591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.320658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.320675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.324585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.324710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.328471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.328571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.328587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.332374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.332443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.332460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.336189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.336279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.336296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.340383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.340513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.340529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.344284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.344344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.344361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.349247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.349301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.349318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.353420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.353485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.353502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.357424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.357490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.357510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.361294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.361389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.361407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.365165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.365248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.816 [2024-07-15 18:41:52.369197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:06.816 [2024-07-15 18:41:52.369282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.816 [2024-07-15 18:41:52.369302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.076 [2024-07-15 18:41:52.373056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.076 [2024-07-15 18:41:52.373142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.076 [2024-07-15 18:41:52.373161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.076 [2024-07-15 18:41:52.377025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.076 [2024-07-15 18:41:52.377099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 
18:41:52.377119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.380670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.380766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.384330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.384459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.384477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.387989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.388107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.388125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.391687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.391772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.391789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.395344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.395427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.395445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.398986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.399075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.399092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.402631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.402707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.077 [2024-07-15 18:41:52.402724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.406225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.406324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.406346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.409832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.409926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.409943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.413437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.413549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.413566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.417069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.417209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.420722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.420847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.424375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.424459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.424476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.427970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.428098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.428115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.431567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.431675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.431692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.435154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.435296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.435312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.438779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.438905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.438922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.442412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.442516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.442533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.446188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.446285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.450754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.450864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.450881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.455957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.456036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.077 [2024-07-15 18:41:52.456056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.077 [2024-07-15 18:41:52.460551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.077 [2024-07-15 18:41:52.460606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.460623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.465104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.465218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.465237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.469665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.469730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.469747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.474671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.474754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.474771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.479019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.479092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.479110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.483538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.483657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.483674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.488929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.488983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.488999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.493950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.494010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.494028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.498658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.498780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.498797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.503980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.504164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.504180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.509775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.509851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.509868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.515354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.515590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.515608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.522043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.522221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.522238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.528207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.528327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.528348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.534800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.534947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.534964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.540730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.540845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.540862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.546944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.547084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.547102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.553310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.553451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.553469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.559588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.559756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.559774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.565523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.565685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.565702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.571278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.571515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.571534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.577808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.577928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.577945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.078 [2024-07-15 18:41:52.584092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.078 [2024-07-15 18:41:52.584216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.078 [2024-07-15 18:41:52.584233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.590358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.590480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.590498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.596585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.596759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.596777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.602647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.602736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.602758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.608826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.609098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.609117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.615145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 
18:41:52.615359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.615378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.621161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.621256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.621273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.625086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.625165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.625182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.079 [2024-07-15 18:41:52.628894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.079 [2024-07-15 18:41:52.628956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.079 [2024-07-15 18:41:52.628975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.632729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.632790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.632809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.636478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.636535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.636554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.640264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.640335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.640360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.644053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 
00:27:07.339 [2024-07-15 18:41:52.644109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.644129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.647968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.648033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.648051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.652365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.652421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.652439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.656632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.656747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.656764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.660637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.660691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.660708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.664587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.664641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.664658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.668361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.668417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.668434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.672329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with 
pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.672393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.672410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.676870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.676937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.676954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.681444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.681579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.339 [2024-07-15 18:41:52.681598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.339 [2024-07-15 18:41:52.685435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.339 [2024-07-15 18:41:52.685491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.685508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.689355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.689445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.689462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.693396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.693451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.693468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.697392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.697468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.697485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.701266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.701322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.701345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.705197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.705297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.705313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.709160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.709214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.709230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.713069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.713135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.713152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.717005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.717074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.717092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.721009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.721090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.721108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.724891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.724966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.728798] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.728863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.728881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.732586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.732711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.732728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.736659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.736730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.736748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.741503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.741610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.741628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.745890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.745995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.746028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.749876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.749960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.749980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.754138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.754206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.754223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.758266] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.758358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.762024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.762091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.762109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.765810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.765882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.765901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.769571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.769651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.769669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.773299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.340 [2024-07-15 18:41:52.773393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.340 [2024-07-15 18:41:52.773411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.340 [2024-07-15 18:41:52.777075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.777132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.777150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.780859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.780934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.780951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 
[2024-07-15 18:41:52.784921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.785007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.785024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.788916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.789033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.789050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.792877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.792965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.792983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.796892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.797003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.797021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.800949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.801020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.801038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.804920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.805021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.805038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.808962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.809052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.809069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.812965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.813053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.813070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.817313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.817454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.817471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.821729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.821833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.821851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.826579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.826678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.826696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.831818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.831930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.831948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.836837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.836905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.836922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.840795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.840883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.840900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.844797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.844881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.848805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.848856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.848873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.852920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.852987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.853004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.856868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.856963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.856984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.860787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.860876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.860893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.864688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.864746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.864763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.341 [2024-07-15 18:41:52.869246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.341 [2024-07-15 18:41:52.869365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.341 [2024-07-15 18:41:52.869383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.873832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.873900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.873917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.878120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.878185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.878202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.882207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.882296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.882313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.886079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.886133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.886150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.889898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.889958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.889976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.342 [2024-07-15 18:41:52.893641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.342 [2024-07-15 18:41:52.893707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.342 [2024-07-15 18:41:52.893727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.897404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.897479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.897498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.901122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.901183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.901203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.904875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.904946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.904965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.908722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.908800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.908818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.912679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.912752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.912770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.916554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.601 [2024-07-15 18:41:52.916622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.601 [2024-07-15 18:41:52.916640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.601 [2024-07-15 18:41:52.920545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.920613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.920631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.924512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.924598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.924616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.928448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.928508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.928525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.932271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.932328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.936183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.936240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.936258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.940167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.940242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.940259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.944054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.944148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.944165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.947878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.947939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 18:41:52.947956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.602 [2024-07-15 18:41:52.951809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90 00:27:07.602 [2024-07-15 18:41:52.951862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.602 [2024-07-15 
18:41:52.951878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:07.602 [2024-07-15 18:41:52.955650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90
00:27:07.602 [2024-07-15 18:41:52.955713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.602 [2024-07-15 18:41:52.955730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.602 [... 15 further data digest error sequences omitted -- each a tcp.c:2067 Data digest error on tqpair=(0xd11810) followed by a WRITE (len:32; lba 22080, 8416, 3456, 18784, 18720, 24288, 8096, 9856, 9728, 1568, 128, 15968, 17952, 12544 and 15520) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), timestamps 18:41:52.959450 through 18:41:53.016750 ...]
00:27:07.602 [2024-07-15 18:41:53.020606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd11810) with pdu=0x2000190fef90
00:27:07.602 [2024-07-15 18:41:53.020679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.602 [2024-07-15 18:41:53.020698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.602
00:27:07.602                                                                              Latency(us)
00:27:07.602 Device Information                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:07.602 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:07.602 nvme0n1                                                                      :       2.00    7126.59     890.82      0.00     0.00    2241.41    1685.21    9362.29
00:27:07.602 ===================================================================================================================
00:27:07.602 Total                                                                        :              7126.59     890.82      0.00     0.00    2241.41    1685.21    9362.29
00:27:07.602 0
00:27:07.602 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:07.602 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:07.602 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:07.602 | .driver_specific
00:27:07.602 | .nvme_error
00:27:07.602 | .status_code
00:27:07.602 | .command_transient_transport_error'
00:27:07.602 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 460 > 0 ))
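The get_transient_errcount trace above boils down to a single RPC call plus a jq filter. As a minimal sketch (the socket path, bdev name and the greater-than-zero check are exactly the ones traced; the errcount variable name is illustrative):

  # Ask bdevperf's RPC server for per-bdev iostat and pull out the count of
  # COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))  # host/digest.sh@71: the test only passes if digest errors were counted (460 here)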
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4064968
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4064968 ']'
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4064968
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4064968
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4064968'
killing process with pid 4064968
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4064968
Received shutdown signal, test time was about 2.000000 seconds
00:27:07.861
00:27:07.861                                                                              Latency(us)
00:27:07.861 Device Information                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:07.861 ===================================================================================================================
00:27:07.861 Total                                                                        :                 0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:27:07.861 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4064968
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4062891
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 4062891 ']'
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 4062891
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4062891
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4062891'
killing process with pid 4062891
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 4062891
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 4062891
00:27:08.120
00:27:08.120 real 0m16.631s
00:27:08.120 user 0m31.590s
00:27:08.120 sys 0m4.768s
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:08.120 18:41:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:08.120 ************************************
00:27:08.120 END TEST nvmf_digest_error
00:27:08.120 ************************************
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.378 rmmod nvme_tcp 00:27:08.378 rmmod nvme_fabrics 00:27:08.378 rmmod nvme_keyring 00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.378 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 4062891 ']' 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4062891 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 4062891 ']' 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 4062891 00:27:08.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4062891) - No such process 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 4062891 is not found' 00:27:08.379 Process with pid 4062891 is not found 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.379 18:41:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.910 18:41:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.910 00:27:10.910 real 0m41.829s 00:27:10.910 user 1m5.602s 00:27:10.910 sys 0m14.048s 00:27:10.910 18:41:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.910 18:41:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.910 ************************************ 00:27:10.910 END TEST nvmf_digest 00:27:10.910 ************************************ 00:27:10.910 18:41:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:10.910 18:41:55 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:27:10.910 18:41:55 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:27:10.910 18:41:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:27:10.910 18:41:55 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.910 18:41:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:10.910 18:41:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.910 18:41:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.910 ************************************ 00:27:10.910 START TEST nvmf_bdevperf 00:27:10.910 ************************************ 00:27:10.910 18:41:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.910 * Looking for test storage... 00:27:10.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.910 18:41:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.180 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.180 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:27:16.180 00:27:16.180 --- 10.0.0.2 ping statistics --- 00:27:16.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.180 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:27:16.180 00:27:16.180 --- 10.0.0.1 ping statistics --- 00:27:16.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.180 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.180 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4069194 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4069194 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 4069194 ']' 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.440 18:42:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.440 [2024-07-15 18:42:01.805126] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:16.440 [2024-07-15 18:42:01.805169] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.440 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.440 [2024-07-15 18:42:01.873702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:16.440 [2024-07-15 18:42:01.950674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:16.440 [2024-07-15 18:42:01.950715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.440 [2024-07-15 18:42:01.950723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.440 [2024-07-15 18:42:01.950729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.440 [2024-07-15 18:42:01.950734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.440 [2024-07-15 18:42:01.950862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.440 [2024-07-15 18:42:01.950974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.440 [2024-07-15 18:42:01.950975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.375 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.376 [2024-07-15 18:42:02.650720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.376 Malloc0 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.376 [2024-07-15 18:42:02.711167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.376 { 00:27:17.376 "params": { 00:27:17.376 "name": "Nvme$subsystem", 00:27:17.376 "trtype": "$TEST_TRANSPORT", 00:27:17.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.376 "adrfam": "ipv4", 00:27:17.376 "trsvcid": "$NVMF_PORT", 00:27:17.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.376 "hdgst": ${hdgst:-false}, 00:27:17.376 "ddgst": ${ddgst:-false} 00:27:17.376 }, 00:27:17.376 "method": "bdev_nvme_attach_controller" 00:27:17.376 } 00:27:17.376 EOF 00:27:17.376 )") 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:17.376 18:42:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:17.376 "params": { 00:27:17.376 "name": "Nvme1", 00:27:17.376 "trtype": "tcp", 00:27:17.376 "traddr": "10.0.0.2", 00:27:17.376 "adrfam": "ipv4", 00:27:17.376 "trsvcid": "4420", 00:27:17.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.376 "hdgst": false, 00:27:17.376 "ddgst": false 00:27:17.376 }, 00:27:17.376 "method": "bdev_nvme_attach_controller" 00:27:17.376 }' 00:27:17.376 [2024-07-15 18:42:02.760556] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:17.376 [2024-07-15 18:42:02.760600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069343 ] 00:27:17.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.376 [2024-07-15 18:42:02.827327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.376 [2024-07-15 18:42:02.900376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.634 Running I/O for 1 seconds... 
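Spelled out against the RPC client directly, the target-side provisioning traced above comes down to the following sequence (a sketch only: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py pointed at the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace, assumed here to listen on the default /var/tmp/spdk.sock):

  # One TCP transport, a 64 MiB malloc bdev with 512-byte blocks as namespace 1,
  # and a listener on 10.0.0.2:4420 -- mirroring the rpc_cmd trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420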
00:27:19.023
00:27:19.023                                                                              Latency(us)
00:27:19.023 Device Information                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:19.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:19.023 Verification LBA range: start 0x0 length 0x4000
00:27:19.023 Nvme1n1                                                                      :       1.00   11740.52      45.86      0.00     0.00   10859.91     912.82   11983.73
00:27:19.023 ===================================================================================================================
00:27:19.023 Total                                                                        :             11740.52      45.86      0.00     0.00   10859.91     912.82   11983.73
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4069585
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:19.023 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:19.023 {
00:27:19.024 "params": {
00:27:19.024 "name": "Nvme$subsystem",
00:27:19.024 "trtype": "$TEST_TRANSPORT",
00:27:19.024 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:19.024 "adrfam": "ipv4",
00:27:19.024 "trsvcid": "$NVMF_PORT",
00:27:19.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:19.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:19.024 "hdgst": ${hdgst:-false},
00:27:19.024 "ddgst": ${ddgst:-false}
00:27:19.024 },
00:27:19.024 "method": "bdev_nvme_attach_controller"
00:27:19.024 }
00:27:19.024 EOF
00:27:19.024 )")
00:27:19.024 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:27:19.024 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:27:19.024 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:27:19.024 18:42:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:19.024 "params": {
00:27:19.024 "name": "Nvme1",
00:27:19.024 "trtype": "tcp",
00:27:19.024 "traddr": "10.0.0.2",
00:27:19.024 "adrfam": "ipv4",
00:27:19.024 "trsvcid": "4420",
00:27:19.024 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:19.024 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:19.024 "hdgst": false,
00:27:19.024 "ddgst": false
00:27:19.024 },
00:27:19.024 "method": "bdev_nvme_attach_controller"
00:27:19.024 }'
00:27:19.024 [2024-07-15 18:42:04.416645] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... [2024-07-15 18:42:04.416691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4069585 ]
00:27:19.024 EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 18:42:04.484384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 18:42:04.553154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:19.590 Running I/O for 15 seconds...
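For reference, the document that gen_nvmf_target_json feeds bdevperf on /dev/fd/63 can be reproduced standalone roughly as follows. The inner params object is exactly the fragment printf'd in the trace above; the outer "subsystems"/"bdev" wrapper is an assumption about how gen_nvmf_target_json frames it, since the assembled document itself is not echoed in this log:

  # Hypothetical standalone re-run of the traced invocation; process
  # substitution supplies the config on a /dev/fd/NN path, as in the trace.
  config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
    "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
    "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(echo "$config") -q 128 -o 4096 -w verify -t 15 -f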
00:27:22.125 18:42:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4069194
18:42:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:27:22.125 [2024-07-15 18:42:07.387175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:22.125 [2024-07-15 18:42:07.387215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.125 [... 57 further WRITE commands omitted (lba 103552 through 104000, len:8, various cids), each likewise completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 sqhd:0000 after the target was killed, timestamps 18:42:07.387233 through 18:42:07.388085 ...]
00:27:22.126 [2024-07-15 18:42:07.388094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:22.126 [2024-07-15 18:42:07.388100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.126 [2024-07-15 18:42:07.388108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:22.126 [2024-07-15 18:42:07.388114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.126 [2024-07-15 18:42:07.388122] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.126 [2024-07-15 18:42:07.388128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.126 [2024-07-15 18:42:07.388136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.126 [2024-07-15 18:42:07.388142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.126 [2024-07-15 18:42:07.388150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.126 [2024-07-15 18:42:07.388156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.126 [2024-07-15 18:42:07.388164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.126 [2024-07-15 18:42:07.388170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.126 [2024-07-15 18:42:07.388178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.126 [2024-07-15 18:42:07.388184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.126 [2024-07-15 18:42:07.388193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.126 [2024-07-15 18:42:07.388199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388267] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.127 [2024-07-15 18:42:07.388751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.127 [2024-07-15 18:42:07.388766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:22.127 [2024-07-15 18:42:07.388839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.388970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 
18:42:07.388984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.388991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.127 [2024-07-15 18:42:07.388997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.127 [2024-07-15 18:42:07.389210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acc70 is same with the state(5) to be set 00:27:22.127 [2024-07-15 18:42:07.389225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:22.127 [2024-07-15 18:42:07.389230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:22.127 [2024-07-15 18:42:07.389235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103536 len:8 PRP1 0x0 PRP2 0x0 00:27:22.127 [2024-07-15 18:42:07.389242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.127 [2024-07-15 18:42:07.389286] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23acc70 was disconnected and freed. reset controller. 
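The NOTICE flood above is the normal teardown path for a dropped NVMe/TCP connection: once the submission queue is gone, nothing still queued on it can complete on the wire, so the driver drains the queue and completes each request locally with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion. A minimal sketch of that drain loop, with a hypothetical request type and callback rather than SPDK's actual structures:

/* Hypothetical sketch of a queued-I/O drain on SQ deletion; the struct and
 * callback are illustrative, not SPDK's. The status values match the log:
 * sct 0x0 (generic), sc 0x08 (ABORTED - SQ DELETION). */
#include <stddef.h>

struct queued_req {
    struct queued_req *next;
    void (*complete_cb)(int sct, int sc);   /* completion callback */
};

static void abort_queued_reqs(struct queued_req *head)
{
    while (head != NULL) {
        struct queued_req *req = head;
        head = req->next;
        req->complete_cb(0x0, 0x08);        /* manual ABORTED completion */
    }
}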
00:27:22.127 [2024-07-15 18:42:07.389327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.127 [2024-07-15 18:42:07.389342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.127 [2024-07-15 18:42:07.389350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.127 [2024-07-15 18:42:07.389356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.127 [2024-07-15 18:42:07.389363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.127 [2024-07-15 18:42:07.389370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.127 [2024-07-15 18:42:07.389376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.127 [2024-07-15 18:42:07.389384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.127 [2024-07-15 18:42:07.389391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.127 [2024-07-15 18:42:07.392126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.127 [2024-07-15 18:42:07.392149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.127 [2024-07-15 18:42:07.392663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.127 [2024-07-15 18:42:07.392679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.127 [2024-07-15 18:42:07.392686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.127 [2024-07-15 18:42:07.392859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.127 [2024-07-15 18:42:07.393031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.127 [2024-07-15 18:42:07.393038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.127 [2024-07-15 18:42:07.393045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.127 [2024-07-15 18:42:07.395807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
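errno = 111 is ECONNREFUSED on Linux: at this point nothing is accepting connections at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port), so each reconnect attempt is refused and the reset completes as failed. A standalone POSIX sketch, not SPDK code, that produces the same errno when no listener is present:

/* Illustrative only -- plain POSIX sockets, assuming Linux errno numbering
 * (ECONNREFUSED == 111). With no listener on the target address this prints
 * the same "connect() failed, errno = 111" that posix_sock_create logs. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}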
00:27:22.127 - 00:27:22.389 [2024-07-15 18:42:07.405374 - 18:42:07.713777] (condensed: the reconnect cycle above repeats 24 more times, roughly every 13 ms, with only the timestamps changing)
    nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
    posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
    nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
    nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
    nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
    nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
    nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
    nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
    bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.389 [2024-07-15 18:42:07.721021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.389 [2024-07-15 18:42:07.721507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.389 [2024-07-15 18:42:07.721528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.389 [2024-07-15 18:42:07.721537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.389 [2024-07-15 18:42:07.721790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.389 [2024-07-15 18:42:07.722043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.389 [2024-07-15 18:42:07.722054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.389 [2024-07-15 18:42:07.722063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.389 [2024-07-15 18:42:07.726120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.389 [2024-07-15 18:42:07.734068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.389 [2024-07-15 18:42:07.734502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.389 [2024-07-15 18:42:07.734546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.389 [2024-07-15 18:42:07.734567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.389 [2024-07-15 18:42:07.735082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.389 [2024-07-15 18:42:07.735257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.389 [2024-07-15 18:42:07.735265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.389 [2024-07-15 18:42:07.735271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.389 [2024-07-15 18:42:07.738016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.389 [2024-07-15 18:42:07.747030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.389 [2024-07-15 18:42:07.747471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.389 [2024-07-15 18:42:07.747486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.389 [2024-07-15 18:42:07.747503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.389 [2024-07-15 18:42:07.747670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.389 [2024-07-15 18:42:07.747836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.389 [2024-07-15 18:42:07.747844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.389 [2024-07-15 18:42:07.747850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.389 [2024-07-15 18:42:07.750570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.389 [2024-07-15 18:42:07.760038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.760520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.760539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.760546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.760719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.760890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.760897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.760903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.763620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.772941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.773368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.773411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.773433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.774010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.774198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.774206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.774211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.776939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.390 [2024-07-15 18:42:07.785900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.786325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.786344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.786351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.786538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.786710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.786717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.786724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.789424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.798884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.799314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.799329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.799335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.799507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.799674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.799681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.799687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.802350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.390 [2024-07-15 18:42:07.811852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.812198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.812213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.812220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.812397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.812578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.812586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.812592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.815295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.824814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.825162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.825177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.825187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.825359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.825526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.825533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.825539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.828244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.390 [2024-07-15 18:42:07.837683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.838073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.838088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.838094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.838260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.838431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.838439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.838445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.841102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.850599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.851024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.851065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.851086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.851537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.851704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.851712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.851718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.854441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.390 [2024-07-15 18:42:07.863693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.864141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.864156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.864163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.864334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.864522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.864533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.864539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.867197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.876628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.877047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.877062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.877069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.877234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.877406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.877415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.877420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.880118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.390 [2024-07-15 18:42:07.889452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.889869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.889884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.889891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.890057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.390 [2024-07-15 18:42:07.890222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.390 [2024-07-15 18:42:07.890229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.390 [2024-07-15 18:42:07.890235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.390 [2024-07-15 18:42:07.892942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.390 [2024-07-15 18:42:07.902502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.390 [2024-07-15 18:42:07.902902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.390 [2024-07-15 18:42:07.902917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.390 [2024-07-15 18:42:07.902923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.390 [2024-07-15 18:42:07.903094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.391 [2024-07-15 18:42:07.903265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.391 [2024-07-15 18:42:07.903273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.391 [2024-07-15 18:42:07.903279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.391 [2024-07-15 18:42:07.906018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.391 [2024-07-15 18:42:07.915504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.391 [2024-07-15 18:42:07.915902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.391 [2024-07-15 18:42:07.915917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.391 [2024-07-15 18:42:07.915923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.391 [2024-07-15 18:42:07.916094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.391 [2024-07-15 18:42:07.916265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.391 [2024-07-15 18:42:07.916273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.391 [2024-07-15 18:42:07.916279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.391 [2024-07-15 18:42:07.919023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.391 [2024-07-15 18:42:07.928369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.391 [2024-07-15 18:42:07.928747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.391 [2024-07-15 18:42:07.928788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.391 [2024-07-15 18:42:07.928810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.391 [2024-07-15 18:42:07.929402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.391 [2024-07-15 18:42:07.929831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.391 [2024-07-15 18:42:07.929839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.391 [2024-07-15 18:42:07.929845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.391 [2024-07-15 18:42:07.936073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.391 [2024-07-15 18:42:07.943458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.391 [2024-07-15 18:42:07.943978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.391 [2024-07-15 18:42:07.943998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.391 [2024-07-15 18:42:07.944007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.391 [2024-07-15 18:42:07.944260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:07.944521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:07.944533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:07.944541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:07.948594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.650 [2024-07-15 18:42:07.956321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:07.956752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:07.956795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:07.956816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:07.957417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:07.957673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:07.957680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:07.957686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:07.960380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.650 [2024-07-15 18:42:07.969052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:07.969494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:07.969509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:07.969516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:07.969682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:07.969848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:07.969856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:07.969861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:07.972531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.650 [2024-07-15 18:42:07.981775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:07.982101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:07.982116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:07.982123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:07.982288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:07.982461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:07.982469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:07.982475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:07.985175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.650 [2024-07-15 18:42:07.994534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:07.994970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:07.994985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:07.994991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:07.995157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:07.995323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:07.995330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:07.995345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:07.998051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.650 [2024-07-15 18:42:08.007367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:08.007774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:08.007789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:08.007795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:08.007961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:08.008128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:08.008135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:08.008141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:08.010747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.650 [2024-07-15 18:42:08.020293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:08.020745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:08.020788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:08.020812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:08.021412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:08.021703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:08.021711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:08.021717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:08.024381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.650 [2024-07-15 18:42:08.033178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:08.033459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:08.033475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:08.033482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:08.033648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:08.033814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:08.033822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:08.033828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:08.036492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.650 [2024-07-15 18:42:08.046081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:08.046440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:08.046455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:08.046462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:08.046629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:08.046795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:08.046803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:08.046809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:08.049473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.650 [2024-07-15 18:42:08.058932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.650 [2024-07-15 18:42:08.059366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.650 [2024-07-15 18:42:08.059382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.650 [2024-07-15 18:42:08.059389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.650 [2024-07-15 18:42:08.059555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.650 [2024-07-15 18:42:08.059721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.650 [2024-07-15 18:42:08.059729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.650 [2024-07-15 18:42:08.059735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.650 [2024-07-15 18:42:08.062399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.071881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.072333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.072356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.072363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.072545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.072711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.072719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.072726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.075392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.651 [2024-07-15 18:42:08.084853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.085243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.085284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.085306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.085853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.086024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.086032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.086037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.088699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.097729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.098149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.098163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.098170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.098344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.098511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.098518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.098524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.101184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.651 [2024-07-15 18:42:08.110620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.111034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.111049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.111055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.111221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.111393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.111401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.111407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.114066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.123439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.123881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.123896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.123903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.124069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.124235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.124243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.124248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.126916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.651 [2024-07-15 18:42:08.136340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.136741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.136755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.136762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.136928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.137094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.137102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.137108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.139773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.149210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.149584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.149600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.149607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.149779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.149950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.149957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.149963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.152710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.651 [2024-07-15 18:42:08.162199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.162637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.162652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.162659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.162830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.163002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.163009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.163015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.165729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.175200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.175662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.175678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.175687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.175858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.176030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.176037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.176043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.178761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.651 [2024-07-15 18:42:08.188193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.188546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.188561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.188567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.188733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.188898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.188906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.188912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.191580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.651 [2024-07-15 18:42:08.201165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.651 [2024-07-15 18:42:08.201587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.651 [2024-07-15 18:42:08.201602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.651 [2024-07-15 18:42:08.201609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.651 [2024-07-15 18:42:08.201774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.651 [2024-07-15 18:42:08.201940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.651 [2024-07-15 18:42:08.201948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.651 [2024-07-15 18:42:08.201954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.651 [2024-07-15 18:42:08.204711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.910 [2024-07-15 18:42:08.214215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.910 [2024-07-15 18:42:08.214656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.910 [2024-07-15 18:42:08.214671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.910 [2024-07-15 18:42:08.214678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.910 [2024-07-15 18:42:08.214849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.910 [2024-07-15 18:42:08.215023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.910 [2024-07-15 18:42:08.215031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.910 [2024-07-15 18:42:08.215037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.910 [2024-07-15 18:42:08.217729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.910 [2024-07-15 18:42:08.227187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.910 [2024-07-15 18:42:08.227612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.910 [2024-07-15 18:42:08.227655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:22.910 [2024-07-15 18:42:08.227676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:22.910 [2024-07-15 18:42:08.228254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:22.910 [2024-07-15 18:42:08.228468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.910 [2024-07-15 18:42:08.228476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.910 [2024-07-15 18:42:08.228482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.910 [2024-07-15 18:42:08.231144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.910 [2024-07-15 18:42:08.240102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.910 [2024-07-15 18:42:08.240524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.910 [2024-07-15 18:42:08.240540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.910 [2024-07-15 18:42:08.240546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.910 [2024-07-15 18:42:08.240712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.910 [2024-07-15 18:42:08.240878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.910 [2024-07-15 18:42:08.240886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.910 [2024-07-15 18:42:08.240892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.910 [2024-07-15 18:42:08.243560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.910 [2024-07-15 18:42:08.252990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.910 [2024-07-15 18:42:08.253421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.910 [2024-07-15 18:42:08.253462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.910 [2024-07-15 18:42:08.253483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.910 [2024-07-15 18:42:08.254061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.910 [2024-07-15 18:42:08.254507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.910 [2024-07-15 18:42:08.254515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.910 [2024-07-15 18:42:08.254521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.910 [2024-07-15 18:42:08.257183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.910 [2024-07-15 18:42:08.265988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.910 [2024-07-15 18:42:08.266405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.910 [2024-07-15 18:42:08.266420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.910 [2024-07-15 18:42:08.266427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.910 [2024-07-15 18:42:08.266592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.910 [2024-07-15 18:42:08.266758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.910 [2024-07-15 18:42:08.266766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.910 [2024-07-15 18:42:08.266772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.910 [2024-07-15 18:42:08.269438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.910 [2024-07-15 18:42:08.278888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.910 [2024-07-15 18:42:08.279246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.910 [2024-07-15 18:42:08.279287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.279308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.279907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.280496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.280505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.280511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.283184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.291605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.292022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.292062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.292084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.292552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.292719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.292727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.292733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.295332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.304369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.304733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.304749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.304759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.304925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.305091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.305099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.305104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.307709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.317165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.317463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.317479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.317486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.317652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.317817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.317825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.317831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.320436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.329908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.330242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.330257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.330264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.330435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.330602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.330609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.330615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.333217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.342698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.342996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.343011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.343018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.343184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.343359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.343369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.343375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.345975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.355457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.355748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.355763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.355770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.355936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.356102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.356109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.356115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.358721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.368208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.368571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.368586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.368594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.368762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.368927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.368935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.368941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.371547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.381022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.381300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.381314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.381321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.381494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.381661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.381669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.381674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.384350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.393817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.394110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.394125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.394131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.394297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.394468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.394476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.394482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.397139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.406607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.406901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.911 [2024-07-15 18:42:08.406916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.911 [2024-07-15 18:42:08.406923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.911 [2024-07-15 18:42:08.407088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.911 [2024-07-15 18:42:08.407255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.911 [2024-07-15 18:42:08.407263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.911 [2024-07-15 18:42:08.407269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.911 [2024-07-15 18:42:08.410146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.911 [2024-07-15 18:42:08.419497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.911 [2024-07-15 18:42:08.419773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.912 [2024-07-15 18:42:08.419788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.912 [2024-07-15 18:42:08.419795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.912 [2024-07-15 18:42:08.419962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.912 [2024-07-15 18:42:08.420129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.912 [2024-07-15 18:42:08.420136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.912 [2024-07-15 18:42:08.420142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.912 [2024-07-15 18:42:08.422814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.912 [2024-07-15 18:42:08.432239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.912 [2024-07-15 18:42:08.432534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.912 [2024-07-15 18:42:08.432549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.912 [2024-07-15 18:42:08.432556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.912 [2024-07-15 18:42:08.432724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.912 [2024-07-15 18:42:08.432890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.912 [2024-07-15 18:42:08.432898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.912 [2024-07-15 18:42:08.432904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.912 [2024-07-15 18:42:08.435511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.912 [2024-07-15 18:42:08.444983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.912 [2024-07-15 18:42:08.445327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.912 [2024-07-15 18:42:08.445382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.912 [2024-07-15 18:42:08.445404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.912 [2024-07-15 18:42:08.445868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.912 [2024-07-15 18:42:08.446034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.912 [2024-07-15 18:42:08.446042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.912 [2024-07-15 18:42:08.446048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.912 [2024-07-15 18:42:08.448653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.912 [2024-07-15 18:42:08.457836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.912 [2024-07-15 18:42:08.458164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.912 [2024-07-15 18:42:08.458179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:22.912 [2024-07-15 18:42:08.458186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:22.912 [2024-07-15 18:42:08.458357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:22.912 [2024-07-15 18:42:08.458524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:22.912 [2024-07-15 18:42:08.458531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:22.912 [2024-07-15 18:42:08.458538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.912 [2024-07-15 18:42:08.461141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.470732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.471011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.471027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.471033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.171 [2024-07-15 18:42:08.471203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.171 [2024-07-15 18:42:08.471380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.171 [2024-07-15 18:42:08.471388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.171 [2024-07-15 18:42:08.471398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.171 [2024-07-15 18:42:08.474064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.483458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.483745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.483760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.483767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.171 [2024-07-15 18:42:08.483933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.171 [2024-07-15 18:42:08.484099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.171 [2024-07-15 18:42:08.484106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.171 [2024-07-15 18:42:08.484112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.171 [2024-07-15 18:42:08.486721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.496237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.496577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.496593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.496600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.171 [2024-07-15 18:42:08.496767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.171 [2024-07-15 18:42:08.496933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.171 [2024-07-15 18:42:08.496940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.171 [2024-07-15 18:42:08.496946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.171 [2024-07-15 18:42:08.499551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.509032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.509369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.509384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.509391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.171 [2024-07-15 18:42:08.509558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.171 [2024-07-15 18:42:08.509724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.171 [2024-07-15 18:42:08.509732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.171 [2024-07-15 18:42:08.509737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.171 [2024-07-15 18:42:08.512345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.521832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.522205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.522222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.522229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.171 [2024-07-15 18:42:08.522402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.171 [2024-07-15 18:42:08.522568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.171 [2024-07-15 18:42:08.522577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.171 [2024-07-15 18:42:08.522582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.171 [2024-07-15 18:42:08.525194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.171 [2024-07-15 18:42:08.534687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.171 [2024-07-15 18:42:08.535071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.171 [2024-07-15 18:42:08.535112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.171 [2024-07-15 18:42:08.535133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.535725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.536204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.536212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.536218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.542281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.549790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.550287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.550328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.550366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.550938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.551191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.551202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.551210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.555269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.562771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.563173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.563188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.563194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.563368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.563539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.563549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.563555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.566221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.575554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.575975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.576018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.576039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.576632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.577166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.577174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.577180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.583140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.590701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.591212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.591232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.591242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.591503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.591764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.591775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.591784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.595838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.603633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.603955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.603970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.603976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.604142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.604308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.604315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.604321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.606994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.616558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.616953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.616968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.616975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.617141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.617307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.617314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.617320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.619988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.629423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.629824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.629865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.629886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.630136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.630302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.630310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.630315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.632981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.642283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.642693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.642708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.642714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.642871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.643029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.643036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.643042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.645572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.655060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.655456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.655472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.655481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.655648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.655818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.655826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.655831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.658440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.668169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.668596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.668637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.172 [2024-07-15 18:42:08.668659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.172 [2024-07-15 18:42:08.669238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.172 [2024-07-15 18:42:08.669506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.172 [2024-07-15 18:42:08.669515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.172 [2024-07-15 18:42:08.669521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.172 [2024-07-15 18:42:08.672183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.172 [2024-07-15 18:42:08.681036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.172 [2024-07-15 18:42:08.681480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.172 [2024-07-15 18:42:08.681522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.173 [2024-07-15 18:42:08.681544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.173 [2024-07-15 18:42:08.682122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.173 [2024-07-15 18:42:08.682389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.173 [2024-07-15 18:42:08.682397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.173 [2024-07-15 18:42:08.682403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.173 [2024-07-15 18:42:08.685066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.173 [2024-07-15 18:42:08.693809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.173 [2024-07-15 18:42:08.694252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.173 [2024-07-15 18:42:08.694292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.173 [2024-07-15 18:42:08.694314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.173 [2024-07-15 18:42:08.694911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.173 [2024-07-15 18:42:08.695473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.173 [2024-07-15 18:42:08.695484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.173 [2024-07-15 18:42:08.695491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.173 [2024-07-15 18:42:08.698091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.173 [2024-07-15 18:42:08.706546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.173 [2024-07-15 18:42:08.706837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.173 [2024-07-15 18:42:08.706852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.173 [2024-07-15 18:42:08.706858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.173 [2024-07-15 18:42:08.707024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.173 [2024-07-15 18:42:08.707191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.173 [2024-07-15 18:42:08.707199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.173 [2024-07-15 18:42:08.707205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.173 [2024-07-15 18:42:08.709817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.173 [2024-07-15 18:42:08.719444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.173 [2024-07-15 18:42:08.719872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.173 [2024-07-15 18:42:08.719886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.173 [2024-07-15 18:42:08.719893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.173 [2024-07-15 18:42:08.720060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.173 [2024-07-15 18:42:08.720228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.173 [2024-07-15 18:42:08.720236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.173 [2024-07-15 18:42:08.720242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.173 [2024-07-15 18:42:08.722890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.432 [2024-07-15 18:42:08.732383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.432 [2024-07-15 18:42:08.732784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.432 [2024-07-15 18:42:08.732800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.732808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.732981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.733154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.733162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.733168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.735891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.745265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.745682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.745698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.745704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.745876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.746049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.746057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.746062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.748791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.758457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.758841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.758857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.758865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.759037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.759210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.759218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.759224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.762029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.771443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.771738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.771754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.771761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.771927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.772093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.772101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.772106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.774721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.784397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.784825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.784840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.784851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.785022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.785193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.785201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.785207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.787959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.797364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.797644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.797659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.797665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.797836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.798008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.798016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.798022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.800777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.810573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.433 [2024-07-15 18:42:08.810949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.433 [2024-07-15 18:42:08.810965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.433 [2024-07-15 18:42:08.810973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.433 [2024-07-15 18:42:08.811154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.433 [2024-07-15 18:42:08.811345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.433 [2024-07-15 18:42:08.811354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.433 [2024-07-15 18:42:08.811361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.433 [2024-07-15 18:42:08.814264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.433 [2024-07-15 18:42:08.823751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.433 [2024-07-15 18:42:08.824186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.433 [2024-07-15 18:42:08.824201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.433 [2024-07-15 18:42:08.824209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.433 [2024-07-15 18:42:08.824398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.433 [2024-07-15 18:42:08.824580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.433 [2024-07-15 18:42:08.824588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.433 [2024-07-15 18:42:08.824598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.433 [2024-07-15 18:42:08.827515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.433 [2024-07-15 18:42:08.836730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.433 [2024-07-15 18:42:08.837138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.433 [2024-07-15 18:42:08.837154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.433 [2024-07-15 18:42:08.837161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.433 [2024-07-15 18:42:08.837334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.433 [2024-07-15 18:42:08.837513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.433 [2024-07-15 18:42:08.837522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.433 [2024-07-15 18:42:08.837528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.433 [2024-07-15 18:42:08.840273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.433 [2024-07-15 18:42:08.849775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.433 [2024-07-15 18:42:08.850227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.433 [2024-07-15 18:42:08.850243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.433 [2024-07-15 18:42:08.850250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.433 [2024-07-15 18:42:08.850438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.433 [2024-07-15 18:42:08.850620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.433 [2024-07-15 18:42:08.850628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.433 [2024-07-15 18:42:08.850635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.433 [2024-07-15 18:42:08.853471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
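The repeated "errno = 111" above is Linux ECONNREFUSED: while the target subsystem is down during the reset test, nothing is listening on 10.0.0.2:4420, so every reconnect attempt is refused immediately. A minimal standalone C sketch (not SPDK code; the address and port simply mirror the log) that produces the same errno against a reachable host with no listener on the port:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Connect to a TCP port with no listener; on Linux the peer answers
     * the SYN with RST and connect() fails with errno 111 (ECONNREFUSED). */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}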
00:27:23.433 [2024-07-15 18:42:08.862801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.863226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.863242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.433 [2024-07-15 18:42:08.863249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.433 [2024-07-15 18:42:08.863436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.433 [2024-07-15 18:42:08.863618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.433 [2024-07-15 18:42:08.863638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.433 [2024-07-15 18:42:08.863644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.433 [2024-07-15 18:42:08.866513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.433 [2024-07-15 18:42:08.875983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.433 [2024-07-15 18:42:08.876397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.433 [2024-07-15 18:42:08.876414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.876421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.876603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.876786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.876794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.876800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.879717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.889166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.889576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.889593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.889600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.889794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.889989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.889998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.890005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.892960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.902351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.902720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.902735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.902742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.902924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.903106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.903114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.903121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.906043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.915503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.915885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.915901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.915909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.916106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.916300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.916308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.916315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.919326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.928529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.928932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.928948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.928954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.929125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.929296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.929304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.929310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.932056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.941619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.941904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.941919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.941925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.942095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.942267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.942275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.942280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.944996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.954491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.954907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.954922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.954929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.955095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.955261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.955268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.955277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.957942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.967390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.967715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.967730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.967737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.967903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.968068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.968076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.968081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.970753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.434 [2024-07-15 18:42:08.980365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.434 [2024-07-15 18:42:08.980690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.434 [2024-07-15 18:42:08.980731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.434 [2024-07-15 18:42:08.980753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.434 [2024-07-15 18:42:08.981231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.434 [2024-07-15 18:42:08.981404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.434 [2024-07-15 18:42:08.981412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.434 [2024-07-15 18:42:08.981418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.434 [2024-07-15 18:42:08.984083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.694 [2024-07-15 18:42:08.993299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.694 [2024-07-15 18:42:08.993700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.694 [2024-07-15 18:42:08.993716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.694 [2024-07-15 18:42:08.993722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.694 [2024-07-15 18:42:08.993894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.694 [2024-07-15 18:42:08.994064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.694 [2024-07-15 18:42:08.994072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.694 [2024-07-15 18:42:08.994078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.694 [2024-07-15 18:42:08.996798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.694 [2024-07-15 18:42:09.006145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.694 [2024-07-15 18:42:09.006514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.694 [2024-07-15 18:42:09.006532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.694 [2024-07-15 18:42:09.006539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.694 [2024-07-15 18:42:09.006705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.694 [2024-07-15 18:42:09.006872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.694 [2024-07-15 18:42:09.006880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.694 [2024-07-15 18:42:09.006885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.694 [2024-07-15 18:42:09.009519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.694 [2024-07-15 18:42:09.018907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.694 [2024-07-15 18:42:09.019306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.694 [2024-07-15 18:42:09.019321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.694 [2024-07-15 18:42:09.019328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.694 [2024-07-15 18:42:09.019499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.694 [2024-07-15 18:42:09.019666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.694 [2024-07-15 18:42:09.019674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.694 [2024-07-15 18:42:09.019679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.694 [2024-07-15 18:42:09.022285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.694 [2024-07-15 18:42:09.031649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.694 [2024-07-15 18:42:09.031947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.694 [2024-07-15 18:42:09.031962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.694 [2024-07-15 18:42:09.031969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.694 [2024-07-15 18:42:09.032134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.694 [2024-07-15 18:42:09.032301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.694 [2024-07-15 18:42:09.032309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.694 [2024-07-15 18:42:09.032315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.694 [2024-07-15 18:42:09.034928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.694 [2024-07-15 18:42:09.044403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.694 [2024-07-15 18:42:09.044694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.694 [2024-07-15 18:42:09.044740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.694 [2024-07-15 18:42:09.044761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.694 [2024-07-15 18:42:09.045311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.694 [2024-07-15 18:42:09.045489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.694 [2024-07-15 18:42:09.045497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.694 [2024-07-15 18:42:09.045503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.694 [2024-07-15 18:42:09.048110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.695 [2024-07-15 18:42:09.057160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.695 [2024-07-15 18:42:09.057476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.695 [2024-07-15 18:42:09.057492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.695 [2024-07-15 18:42:09.057498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.695 [2024-07-15 18:42:09.057664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.695 [2024-07-15 18:42:09.057831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.695 [2024-07-15 18:42:09.057839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.695 [2024-07-15 18:42:09.057845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.695 [2024-07-15 18:42:09.060462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.695 [2024-07-15 18:42:09.069908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.695 [2024-07-15 18:42:09.070308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.695 [2024-07-15 18:42:09.070365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.695 [2024-07-15 18:42:09.070388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.695 [2024-07-15 18:42:09.070966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.695 [2024-07-15 18:42:09.071473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.695 [2024-07-15 18:42:09.071481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.695 [2024-07-15 18:42:09.071487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.695 [2024-07-15 18:42:09.074092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.695 [2024-07-15 18:42:09.082696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.695 [2024-07-15 18:42:09.083148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.695 [2024-07-15 18:42:09.083162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.695 [2024-07-15 18:42:09.083169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.695 [2024-07-15 18:42:09.083335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.695 [2024-07-15 18:42:09.083509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.695 [2024-07-15 18:42:09.083517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.695 [2024-07-15 18:42:09.083523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.695 [2024-07-15 18:42:09.086131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.695 [2024-07-15 18:42:09.095679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.695 [2024-07-15 18:42:09.095955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.695 [2024-07-15 18:42:09.095970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.695 [2024-07-15 18:42:09.095977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.695 [2024-07-15 18:42:09.096143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.695 [2024-07-15 18:42:09.096309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.695 [2024-07-15 18:42:09.096317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.695 [2024-07-15 18:42:09.096323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.695 [2024-07-15 18:42:09.099067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.695 [2024-07-15 18:42:09.108646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.695 [2024-07-15 18:42:09.108919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.695 [2024-07-15 18:42:09.108935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.695 [2024-07-15 18:42:09.108942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.695 [2024-07-15 18:42:09.109114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.695 [2024-07-15 18:42:09.109289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.695 [2024-07-15 18:42:09.109297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.695 [2024-07-15 18:42:09.109303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.695 [2024-07-15 18:42:09.112062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
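Each refused connect above is followed by "Failed to flush tqpair=... (9): Bad file descriptor"; error code 9 is EBADF, i.e. the qpair's socket has already been torn down by the failed connect before the completion path tries to flush it. A tiny standalone sketch of the same errno, assuming nothing about SPDK internals:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Writing to a descriptor that has already been closed fails with
     * errno 9 (EBADF), the same code reported by the flush above. */
    int fds[2];
    if (pipe(fds) < 0)
        return 1;
    close(fds[1]);                            /* invalidate the write end */
    if (write(fds[1], "x", 1) < 0)
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fds[0]);
    return 0;
}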
00:27:23.695 [2024-07-15 18:42:09.121640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.695 [2024-07-15 18:42:09.122079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.695 [2024-07-15 18:42:09.122096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.695 [2024-07-15 18:42:09.122103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.695 [2024-07-15 18:42:09.122286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.695 [2024-07-15 18:42:09.122478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.695 [2024-07-15 18:42:09.122488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.695 [2024-07-15 18:42:09.122494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.695 [2024-07-15 18:42:09.125414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.695 [2024-07-15 18:42:09.134917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.695 [2024-07-15 18:42:09.135365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.695 [2024-07-15 18:42:09.135383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.695 [2024-07-15 18:42:09.135394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.695 [2024-07-15 18:42:09.135588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.695 [2024-07-15 18:42:09.135782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.695 [2024-07-15 18:42:09.135790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.695 [2024-07-15 18:42:09.135797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.695 [2024-07-15 18:42:09.138770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.695 [2024-07-15 18:42:09.148066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.695 [2024-07-15 18:42:09.148444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.695 [2024-07-15 18:42:09.148462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.695 [2024-07-15 18:42:09.148469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.695 [2024-07-15 18:42:09.148663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.695 [2024-07-15 18:42:09.148858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.148866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.148873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.151841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.696 [2024-07-15 18:42:09.161576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.162009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.162025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.162033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.162227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.162429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.162439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.162445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.165448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.696 [2024-07-15 18:42:09.174640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.175047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.175063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.175070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.175241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.175420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.175432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.175439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.178244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.696 [2024-07-15 18:42:09.187647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.188053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.188068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.188075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.188245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.188423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.188432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.188438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.191180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.696 [2024-07-15 18:42:09.200514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.200872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.200887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.200894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.201065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.201237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.201245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.201251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.203969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.696 [2024-07-15 18:42:09.213418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.213836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.213851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.213858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.214024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.214191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.214198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.214204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.216874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.696 [2024-07-15 18:42:09.226503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.226918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.226934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.226940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.227111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.227283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.227291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.227297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.230044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.696 [2024-07-15 18:42:09.239598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.696 [2024-07-15 18:42:09.239998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.696 [2024-07-15 18:42:09.240013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.696 [2024-07-15 18:42:09.240020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.696 [2024-07-15 18:42:09.240191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.696 [2024-07-15 18:42:09.240368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.696 [2024-07-15 18:42:09.240376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.696 [2024-07-15 18:42:09.240382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.696 [2024-07-15 18:42:09.243122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.956 [2024-07-15 18:42:09.252689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.253116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.253131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.253138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.253309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.253486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.253505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.253511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.256253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.956 [2024-07-15 18:42:09.265600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.266019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.266034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.266041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.266210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.266383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.266392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.266397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.269002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.956 [2024-07-15 18:42:09.278346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.278739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.278754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.278761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.278926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.279092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.279100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.279106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.281716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.956 [2024-07-15 18:42:09.291147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.291517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.291533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.291540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.291708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.291874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.291882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.291888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.294495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.956 [2024-07-15 18:42:09.303986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.304383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.304399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.304405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.304579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.304737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.304744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.304753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.307341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.956 [2024-07-15 18:42:09.316817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.317244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.317259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.317266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.317436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.317602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.317610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.317616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.320276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.956 [2024-07-15 18:42:09.329601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.330044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.330060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.330067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.330233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.330405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.330413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.330419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.333021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.956 [2024-07-15 18:42:09.342350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.342778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.342793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.342799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.342966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.343132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.343140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.343146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.345701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.956 [2024-07-15 18:42:09.355083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.956 [2024-07-15 18:42:09.355515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.956 [2024-07-15 18:42:09.355529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.956 [2024-07-15 18:42:09.355535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.956 [2024-07-15 18:42:09.355701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.956 [2024-07-15 18:42:09.355867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.956 [2024-07-15 18:42:09.355874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.956 [2024-07-15 18:42:09.355880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.956 [2024-07-15 18:42:09.358542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:23.957 [2024-07-15 18:42:09.368096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:23.957 [2024-07-15 18:42:09.368497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.957 [2024-07-15 18:42:09.368512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:23.957 [2024-07-15 18:42:09.368518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:23.957 [2024-07-15 18:42:09.368684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:23.957 [2024-07-15 18:42:09.368851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:23.957 [2024-07-15 18:42:09.368858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:23.957 [2024-07-15 18:42:09.368864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:23.957 [2024-07-15 18:42:09.371528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:23.957 [2024-07-15 18:42:09.380816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.381207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.381248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.381269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.381860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.382312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.382319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.382325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.384930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.393675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.394021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.394036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.394042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.394212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.394385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.394392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.394398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.397061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.406671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.407098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.407139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.407162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.407735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.408124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.408140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.408154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.414392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.421416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.421878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.421897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.421908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.422160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.422419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.422431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.422440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.426506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.434470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.434846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.434862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.434869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.435041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.435212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.435220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.435230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.437945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.447433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.447824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.447865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.447886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.448345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.448518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.448526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.448532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.451242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.460419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.460811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.460826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.460833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.460998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.461164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.461172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.461177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.463913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.473240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.473678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.473694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.473700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.473866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.474032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.474040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.474046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.476655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.485996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.486488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.486538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.486559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.487128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.487296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.487303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.957 [2024-07-15 18:42:09.487309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.957 [2024-07-15 18:42:09.489915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.957 [2024-07-15 18:42:09.498793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.957 [2024-07-15 18:42:09.499212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.957 [2024-07-15 18:42:09.499253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:23.957 [2024-07-15 18:42:09.499275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:23.957 [2024-07-15 18:42:09.499865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:23.957 [2024-07-15 18:42:09.500453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:23.957 [2024-07-15 18:42:09.500477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:23.958 [2024-07-15 18:42:09.500502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:23.958 [2024-07-15 18:42:09.503099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:23.958 [2024-07-15 18:42:09.511748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:23.958 [2024-07-15 18:42:09.512173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.216 [2024-07-15 18:42:09.512188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.216 [2024-07-15 18:42:09.512195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.216 [2024-07-15 18:42:09.512373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.216 [2024-07-15 18:42:09.512545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.216 [2024-07-15 18:42:09.512553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.216 [2024-07-15 18:42:09.512559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.216 [2024-07-15 18:42:09.515242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.216 [2024-07-15 18:42:09.524590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.216 [2024-07-15 18:42:09.525031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.216 [2024-07-15 18:42:09.525071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.216 [2024-07-15 18:42:09.525094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.216 [2024-07-15 18:42:09.525582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.216 [2024-07-15 18:42:09.525754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.216 [2024-07-15 18:42:09.525762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.216 [2024-07-15 18:42:09.525768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.216 [2024-07-15 18:42:09.528434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.216 [2024-07-15 18:42:09.537597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.216 [2024-07-15 18:42:09.538017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.216 [2024-07-15 18:42:09.538032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.216 [2024-07-15 18:42:09.538039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.216 [2024-07-15 18:42:09.538205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.216 [2024-07-15 18:42:09.538375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.216 [2024-07-15 18:42:09.538383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.216 [2024-07-15 18:42:09.538390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.216 [2024-07-15 18:42:09.541106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.216 [2024-07-15 18:42:09.550609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.216 [2024-07-15 18:42:09.551020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.216 [2024-07-15 18:42:09.551035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.551042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.551218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.551389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.551397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.551420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.554136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.563473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.563840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.563855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.563861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.564028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.564194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.564202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.564207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.566825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.576313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.576611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.576626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.576633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.576799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.576966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.576974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.576979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.579589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.589082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.589546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.589561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.589567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.589725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.589882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.589890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.589895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.592571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.601892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.602232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.602247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.602254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.602426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.602593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.602601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.602607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.605208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.614690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.615118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.615133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.615142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.615309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.615479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.615487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.615493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.618098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.627464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.627769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.627785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.627791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.627958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.628124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.628131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.628138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.630744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.640446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.640793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.640807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.640814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.640980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.641146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.641153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.641159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.643875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.653304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.653735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.653750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.653757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.653923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.654089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.654099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.654105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.656769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.666204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.666627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.666643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.666649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.666815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.666981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.666989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.666995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.669662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.679047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.679469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.217 [2024-07-15 18:42:09.679485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.217 [2024-07-15 18:42:09.679491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.217 [2024-07-15 18:42:09.679662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.217 [2024-07-15 18:42:09.679833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.217 [2024-07-15 18:42:09.679841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.217 [2024-07-15 18:42:09.679847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.217 [2024-07-15 18:42:09.682594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.217 [2024-07-15 18:42:09.691973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.217 [2024-07-15 18:42:09.692353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.692395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.692416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.692944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.693116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.693124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.693130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.695841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.704875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.705267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.705282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.705289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.705461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.705628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.705636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.705641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.708302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.717619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.717945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.717960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.717966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.718132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.718299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.718307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.718313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.720920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.730412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.730754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.730769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.730776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.730942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.731109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.731117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.731122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.733730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.743128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.743407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.743422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.743428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.743598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.743764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.743771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.743777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.746602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.756216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.756644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.756686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.756708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.757228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.757401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.757409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.757415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.760021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.218 [2024-07-15 18:42:09.769119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.218 [2024-07-15 18:42:09.769472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.218 [2024-07-15 18:42:09.769487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.218 [2024-07-15 18:42:09.769494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.218 [2024-07-15 18:42:09.769665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.218 [2024-07-15 18:42:09.769840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.218 [2024-07-15 18:42:09.769848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.218 [2024-07-15 18:42:09.769854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.218 [2024-07-15 18:42:09.772600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.781935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.782214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.477 [2024-07-15 18:42:09.782229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.477 [2024-07-15 18:42:09.782235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.477 [2024-07-15 18:42:09.782407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.477 [2024-07-15 18:42:09.782574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.477 [2024-07-15 18:42:09.782581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.477 [2024-07-15 18:42:09.782591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.477 [2024-07-15 18:42:09.785189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.794766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.795057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.477 [2024-07-15 18:42:09.795072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.477 [2024-07-15 18:42:09.795079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.477 [2024-07-15 18:42:09.795245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.477 [2024-07-15 18:42:09.795416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.477 [2024-07-15 18:42:09.795424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.477 [2024-07-15 18:42:09.795430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.477 [2024-07-15 18:42:09.798031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.807507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.807795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.477 [2024-07-15 18:42:09.807809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.477 [2024-07-15 18:42:09.807815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.477 [2024-07-15 18:42:09.807981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.477 [2024-07-15 18:42:09.808148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.477 [2024-07-15 18:42:09.808155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.477 [2024-07-15 18:42:09.808161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.477 [2024-07-15 18:42:09.810767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.820244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.820592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.477 [2024-07-15 18:42:09.820608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.477 [2024-07-15 18:42:09.820614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.477 [2024-07-15 18:42:09.820780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.477 [2024-07-15 18:42:09.820947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.477 [2024-07-15 18:42:09.820954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.477 [2024-07-15 18:42:09.820960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.477 [2024-07-15 18:42:09.823570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.833047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.833384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.477 [2024-07-15 18:42:09.833399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.477 [2024-07-15 18:42:09.833405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.477 [2024-07-15 18:42:09.833572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.477 [2024-07-15 18:42:09.833738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.477 [2024-07-15 18:42:09.833746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.477 [2024-07-15 18:42:09.833752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.477 [2024-07-15 18:42:09.836428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.477 [2024-07-15 18:42:09.845827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.477 [2024-07-15 18:42:09.846158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.846174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.846181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.846352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.846519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.846527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.846532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.849211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.858798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.859193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.859208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.859215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.859385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.859552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.859559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.859565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.862224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.871730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.872053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.872068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.872075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.872241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.872417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.872426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.872431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.875097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.884576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.884888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.884903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.884910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.885067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.885224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.885232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.885237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.887856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.897333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.897698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.897713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.897719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.897885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.898051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.898058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.898064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.900672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.910148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.910422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.910437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.910443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.910609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.910776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.910783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.910789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.913397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.922962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.923322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.923350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.923358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.923524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.923690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.923698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.923704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.926305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.935783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.936193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.936208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.936214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.936389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.936561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.936568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.936574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.939314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.948789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.949206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.949221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.949227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.949404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.949585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.949593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.949599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.952258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.961699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.962104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.962153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.962176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.962679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.962847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.962854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.962860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.965532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.974413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.974774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.974788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.478 [2024-07-15 18:42:09.974795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.478 [2024-07-15 18:42:09.974961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.478 [2024-07-15 18:42:09.975127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.478 [2024-07-15 18:42:09.975135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.478 [2024-07-15 18:42:09.975141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.478 [2024-07-15 18:42:09.977745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.478 [2024-07-15 18:42:09.987219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.478 [2024-07-15 18:42:09.987667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.478 [2024-07-15 18:42:09.987709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.479 [2024-07-15 18:42:09.987730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.479 [2024-07-15 18:42:09.988308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.479 [2024-07-15 18:42:09.988787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.479 [2024-07-15 18:42:09.988795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.479 [2024-07-15 18:42:09.988801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.479 [2024-07-15 18:42:09.991405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.479 [2024-07-15 18:42:09.999982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.479 [2024-07-15 18:42:10.000416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.479 [2024-07-15 18:42:10.000431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.479 [2024-07-15 18:42:10.000438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.479 [2024-07-15 18:42:10.000605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.479 [2024-07-15 18:42:10.000777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.479 [2024-07-15 18:42:10.000785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.479 [2024-07-15 18:42:10.000791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.479 [2024-07-15 18:42:10.003540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.479 [2024-07-15 18:42:10.012933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.479 [2024-07-15 18:42:10.013214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.479 [2024-07-15 18:42:10.013229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.479 [2024-07-15 18:42:10.013236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.479 [2024-07-15 18:42:10.013412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.479 [2024-07-15 18:42:10.013584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.479 [2024-07-15 18:42:10.013592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.479 [2024-07-15 18:42:10.013598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.479 [2024-07-15 18:42:10.016342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.479 [2024-07-15 18:42:10.026934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.479 [2024-07-15 18:42:10.027367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.479 [2024-07-15 18:42:10.027384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.479 [2024-07-15 18:42:10.027392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.479 [2024-07-15 18:42:10.027564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.479 [2024-07-15 18:42:10.027737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.479 [2024-07-15 18:42:10.027745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.479 [2024-07-15 18:42:10.027751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.479 [2024-07-15 18:42:10.030500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.738 [2024-07-15 18:42:10.040033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.738 [2024-07-15 18:42:10.040435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.738 [2024-07-15 18:42:10.040451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.738 [2024-07-15 18:42:10.040458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.738 [2024-07-15 18:42:10.040630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.738 [2024-07-15 18:42:10.040802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.738 [2024-07-15 18:42:10.040810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.738 [2024-07-15 18:42:10.040817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.738 [2024-07-15 18:42:10.043569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.738 [2024-07-15 18:42:10.052946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.738 [2024-07-15 18:42:10.053289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.738 [2024-07-15 18:42:10.053304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.738 [2024-07-15 18:42:10.053311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.738 [2024-07-15 18:42:10.053482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.738 [2024-07-15 18:42:10.053649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.738 [2024-07-15 18:42:10.053656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.738 [2024-07-15 18:42:10.053662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.738 [2024-07-15 18:42:10.056370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.738 [2024-07-15 18:42:10.065949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.738 [2024-07-15 18:42:10.066298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.738 [2024-07-15 18:42:10.066314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.738 [2024-07-15 18:42:10.066321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.738 [2024-07-15 18:42:10.066499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.066671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.066678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.066684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.069428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.078972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.079304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.079319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.079326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.079505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.079677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.079685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.079692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.082436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.091990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.092333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.092354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.092364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.092536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.092708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.092716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.092722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.095465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
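Each failed attempt also ends with nvme_tcp_qpair_process_completions reporting "(9): Bad file descriptor": by the time the qpair tries to flush, its socket has already been torn down, and errno 9 is EBADF. The errno itself is easy to demonstrate outside SPDK (a sketch of the errno behavior only, not SPDK's code path):

```c
/*
 * Sketch of the "(9): Bad file descriptor" part: errno 9 is EBADF.
 * Once the failed connect tears the socket down, any further I/O on
 * that descriptor fails this way. Not SPDK's code path, just the errno.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(STDOUT_FILENO);  /* any valid descriptor */

    close(fd);                    /* torn down, like the dead qpair socket */
    if (write(fd, "x", 1) < 0)
        printf("(%d): %s\n", errno, strerror(errno));  /* (9): Bad file descriptor */
    return 0;
}
```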
00:27:24.739 [2024-07-15 18:42:10.105010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.105360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.105375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.105381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.105553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.105724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.105731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.105738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.108479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.117939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.118363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.118379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.118386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.118557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.118732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.118739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.118745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.121492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.130882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.131304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.131320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.131326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.131503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.131675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.131686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.131692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.134436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.143828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.144164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.144180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.144186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.144363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.144534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.144542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.144549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.147291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.156759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.157184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.157199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.157206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.157382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.157553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.157562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.157567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.160274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.169825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.170171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.170186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.170192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.170367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.170541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.170549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.170555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.173297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.182859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.183257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.183273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.183279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.183455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.183626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.183634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.183640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.186384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.195942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.196269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.196285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.196291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.739 [2024-07-15 18:42:10.196467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.739 [2024-07-15 18:42:10.196639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.739 [2024-07-15 18:42:10.196647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.739 [2024-07-15 18:42:10.196653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.739 [2024-07-15 18:42:10.199546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.739 [2024-07-15 18:42:10.208954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.739 [2024-07-15 18:42:10.209236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.739 [2024-07-15 18:42:10.209250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.739 [2024-07-15 18:42:10.209257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.209434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.209605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.209613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.209620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.212365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.740 [2024-07-15 18:42:10.221918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.222347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.222362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.222369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.222544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.222715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.222723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.222729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.225482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.740 [2024-07-15 18:42:10.234876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.235127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.235143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.235150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.235321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.235498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.235506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.235512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.238255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.740 [2024-07-15 18:42:10.247920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.248206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.248221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.248227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.248403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.248574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.248582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.248587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.251329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.740 [2024-07-15 18:42:10.260953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.261235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.261251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.261258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.261435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.261607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.261615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.261624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.264371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:24.740 [2024-07-15 18:42:10.273911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.274252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.274267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.274274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.274450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.274621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.274628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.274635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.277373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
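Taken together, the entries trace one reset attempt roughly every 13 ms: disconnect, re-connect, fail, mark the controller failed, retry. As an illustration only (this is not SPDK's actual reconnect machinery, and try_reconnect() is a hypothetical stand-in for a single connect attempt), the control flow amounts to a bounded retry loop:

```c
/*
 * Illustrative only: the log shows one reset attempt roughly every 13 ms,
 * each ending in _bdev_nvme_reset_ctrlr_complete. This bounded retry loop
 * is an assumption about the shape of that behavior, not SPDK's actual
 * reconnect machinery; try_reconnect() is a hypothetical stand-in.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_reconnect(void)
{
    return false;  /* hypothetical single connect attempt; here it always fails */
}

static bool reconnect_with_retries(int max_attempts, useconds_t delay_us)
{
    for (int i = 0; i < max_attempts; i++) {
        if (try_reconnect())
            return true;   /* controller reset succeeded */
        usleep(delay_us);  /* wait before the next attempt */
    }
    return false;          /* give up; caller marks the controller failed */
}

int main(void)
{
    if (!reconnect_with_retries(10, 13000))  /* ~13 ms apart, as in the log */
        puts("Resetting controller failed.");
    return 0;
}
```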
00:27:24.740 [2024-07-15 18:42:10.286951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:24.740 [2024-07-15 18:42:10.287283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.740 [2024-07-15 18:42:10.287298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:24.740 [2024-07-15 18:42:10.287305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:24.740 [2024-07-15 18:42:10.287481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:24.740 [2024-07-15 18:42:10.287653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:24.740 [2024-07-15 18:42:10.287661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:24.740 [2024-07-15 18:42:10.287666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:24.740 [2024-07-15 18:42:10.290414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.000 [2024-07-15 18:42:10.299965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.000 [2024-07-15 18:42:10.300296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.000 [2024-07-15 18:42:10.300311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.000 [2024-07-15 18:42:10.300318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.000 [2024-07-15 18:42:10.300496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.000 [2024-07-15 18:42:10.300667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.000 [2024-07-15 18:42:10.300675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.000 [2024-07-15 18:42:10.300681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.000 [2024-07-15 18:42:10.303425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.000 [2024-07-15 18:42:10.312887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.000 [2024-07-15 18:42:10.313191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.000 [2024-07-15 18:42:10.313210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.000 [2024-07-15 18:42:10.313216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.000 [2024-07-15 18:42:10.313394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.000 [2024-07-15 18:42:10.313565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.000 [2024-07-15 18:42:10.313573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.000 [2024-07-15 18:42:10.313579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.000 [2024-07-15 18:42:10.316287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.000 [2024-07-15 18:42:10.325962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.000 [2024-07-15 18:42:10.326244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.000 [2024-07-15 18:42:10.326259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.000 [2024-07-15 18:42:10.326266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.000 [2024-07-15 18:42:10.326443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.000 [2024-07-15 18:42:10.326614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.000 [2024-07-15 18:42:10.326622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.000 [2024-07-15 18:42:10.326628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.000 [2024-07-15 18:42:10.329369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.000 [2024-07-15 18:42:10.338926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.000 [2024-07-15 18:42:10.339251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.000 [2024-07-15 18:42:10.339266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.000 [2024-07-15 18:42:10.339273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.000 [2024-07-15 18:42:10.339449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.000 [2024-07-15 18:42:10.339620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.000 [2024-07-15 18:42:10.339628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.000 [2024-07-15 18:42:10.339634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.342350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.351848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.352200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.352215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.352222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.352398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.352573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.352582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.352588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.355298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.364840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.365140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.365155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.365162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.365333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.365510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.365518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.365524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.368231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.377794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.378094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.378110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.378117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.378287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.378464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.378472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.378478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.381190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4069194 Killed "${NVMF_APP[@]}" "$@"
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4071009
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4071009
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:25.001 [2024-07-15 18:42:10.390765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 4071009 ']'
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:25.001 [2024-07-15 18:42:10.391190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.391206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.391213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.391388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:25.001 [2024-07-15 18:42:10.391561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.391569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.391575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:25.001 18:42:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.001 [2024-07-15 18:42:10.394319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
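At this point bdevperf.sh kills the old target (the "Killed" line for pid 4069194), tgt_init launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace (new pid 4071009), and waitforlisten blocks until that process accepts connections on /var/tmp/spdk.sock, polling up to max_retries=100 times. Conceptually the wait looks like the sketch below (the real waitforlisten is a bash helper in autotest_common.sh, not this C function; this is an assumption-labeled illustration of the polling idea):

```c
/*
 * Conceptual sketch of what waitforlisten does: poll until the restarted
 * nvmf_tgt accepts connections on its RPC socket. The real helper is a
 * bash function in autotest_common.sh; this C version is an assumption
 * used only to illustrate the polling idea.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static bool wait_for_rpc_socket(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return true;       /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);    /* 100 ms between probes */
    }
    return false;
}

int main(void)
{
    /* max_retries=100 mirrors the variable visible in the trace above */
    if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100))
        puts("target is listening");
    return 0;
}
```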
00:27:25.001 [2024-07-15 18:42:10.403715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.403998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.404013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.404020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.404191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.404368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.404377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.404383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.407122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.416682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.417010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.417025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.417032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.417203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.417380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.417388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.417394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.420134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.429710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.430060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.430076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.430082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.430254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.430431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.430440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.430446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.433188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.436004] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:27:25.001 [2024-07-15 18:42:10.436044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:25.001 [2024-07-15 18:42:10.442749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.443124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.443139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.443146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.443318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.443495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.443504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.443510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.446380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.001 [2024-07-15 18:42:10.455778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.001 [2024-07-15 18:42:10.456159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.001 [2024-07-15 18:42:10.456174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.001 [2024-07-15 18:42:10.456181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.001 [2024-07-15 18:42:10.456359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.001 [2024-07-15 18:42:10.456532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.001 [2024-07-15 18:42:10.456539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.001 [2024-07-15 18:42:10.456545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.001 [2024-07-15 18:42:10.459289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.002 EAL: No free 2048 kB hugepages reported on node 1
00:27:25.002 [2024-07-15 18:42:10.468870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.002 [2024-07-15 18:42:10.469249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.002 [2024-07-15 18:42:10.469264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.002 [2024-07-15 18:42:10.469271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.002 [2024-07-15 18:42:10.469448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.002 [2024-07-15 18:42:10.469619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.002 [2024-07-15 18:42:10.469627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.002 [2024-07-15 18:42:10.469633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.002 [2024-07-15 18:42:10.472381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.002 [2024-07-15 18:42:10.481939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.482342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.482357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.482364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.482535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.482707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.482715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.482721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.485470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.002 [2024-07-15 18:42:10.495023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.495297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.495312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.495319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.495496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.495668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.495675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.495682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.498431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:25.002 [2024-07-15 18:42:10.507049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.002 [2024-07-15 18:42:10.507986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.508267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.508282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.508292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.508470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.508643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.508651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.508657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.511402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.002 [2024-07-15 18:42:10.521014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.521322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.521344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.521352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.521524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.521695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.521703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.521709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.524431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:25.002 [2024-07-15 18:42:10.533997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.534386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.534403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.534410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.534582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.534757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.534765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.534772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.537485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.002 [2024-07-15 18:42:10.546993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.002 [2024-07-15 18:42:10.547379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.002 [2024-07-15 18:42:10.547396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.002 [2024-07-15 18:42:10.547403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.002 [2024-07-15 18:42:10.547575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.002 [2024-07-15 18:42:10.547749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.002 [2024-07-15 18:42:10.547763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.002 [2024-07-15 18:42:10.547769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.002 [2024-07-15 18:42:10.550486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:25.263 [2024-07-15 18:42:10.560033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.560387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.560403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.560410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.560582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.560753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.560762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.560768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.563516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.573074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.573460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.573477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.573484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.573656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.573828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.573836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.573843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.576587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.586013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.586427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.586443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.586450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.586623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.586795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.586803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.586809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.587014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:25.263 [2024-07-15 18:42:10.587041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:25.263 [2024-07-15 18:42:10.587048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:25.263 [2024-07-15 18:42:10.587054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:25.263 [2024-07-15 18:42:10.587059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:25.263 [2024-07-15 18:42:10.587104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:25.263 [2024-07-15 18:42:10.587211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:25.263 [2024-07-15 18:42:10.587212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:25.263 [2024-07-15 18:42:10.589557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.599120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.599538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.599556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.599564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.599736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.599908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.599916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.599922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.602667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.612226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.612671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.612689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.612696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.612869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.613041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.613049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.613056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.615799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.625204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.625655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.625673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.625681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.625854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.626029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.626043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.626049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.628796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.638188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.638638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.263 [2024-07-15 18:42:10.638656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.263 [2024-07-15 18:42:10.638663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.263 [2024-07-15 18:42:10.638837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.263 [2024-07-15 18:42:10.639008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.263 [2024-07-15 18:42:10.639017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.263 [2024-07-15 18:42:10.639023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.263 [2024-07-15 18:42:10.641771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.263 [2024-07-15 18:42:10.651158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.263 [2024-07-15 18:42:10.651591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.651607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.651614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.651785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.651957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.651964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.651970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.654713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.664259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.664687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.664703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.664710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.664882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.665053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.665061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.665068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.667809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.677204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.677628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.677644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.677650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.677821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.677992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.678000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.678006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.680752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.690302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.690720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.690735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.690742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.690914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.691088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.691096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.691101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.693846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.703392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.703840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.703856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.703862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.704034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.704206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.704214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.704220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.707029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.716418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.716835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.716850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.716861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.717033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.717204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.717211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.717217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.719960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.729514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.729840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.729855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.729861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.730032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.730204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.730211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.730218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.732959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.742498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.742900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.742915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.742922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.743094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.743265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.743273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.743279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.746022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.755567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.755964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.755979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.755985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.756156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.756327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.756335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.756351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.759288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.768508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.768910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.768926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.768933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.769104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.769277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.769284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.769291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.772035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.781586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.781988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.264 [2024-07-15 18:42:10.782003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.264 [2024-07-15 18:42:10.782010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.264 [2024-07-15 18:42:10.782181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.264 [2024-07-15 18:42:10.782356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.264 [2024-07-15 18:42:10.782364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.264 [2024-07-15 18:42:10.782370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.264 [2024-07-15 18:42:10.785108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.264 [2024-07-15 18:42:10.794656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.264 [2024-07-15 18:42:10.795060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.265 [2024-07-15 18:42:10.795074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.265 [2024-07-15 18:42:10.795081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.265 [2024-07-15 18:42:10.795253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.265 [2024-07-15 18:42:10.795429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.265 [2024-07-15 18:42:10.795437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.265 [2024-07-15 18:42:10.795443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.265 [2024-07-15 18:42:10.798180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.265 [2024-07-15 18:42:10.807724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.265 [2024-07-15 18:42:10.808105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.265 [2024-07-15 18:42:10.808119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.265 [2024-07-15 18:42:10.808126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.265 [2024-07-15 18:42:10.808297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.265 [2024-07-15 18:42:10.808473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.265 [2024-07-15 18:42:10.808482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.265 [2024-07-15 18:42:10.808488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.265 [2024-07-15 18:42:10.811225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.524 [2024-07-15 18:42:10.820765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.524 [2024-07-15 18:42:10.821163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.524 [2024-07-15 18:42:10.821179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.524 [2024-07-15 18:42:10.821186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.524 [2024-07-15 18:42:10.821361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.524 [2024-07-15 18:42:10.821533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.524 [2024-07-15 18:42:10.821541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.524 [2024-07-15 18:42:10.821546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.524 [2024-07-15 18:42:10.824288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.524 [2024-07-15 18:42:10.833838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.524 [2024-07-15 18:42:10.834235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.524 [2024-07-15 18:42:10.834250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.524 [2024-07-15 18:42:10.834257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.524 [2024-07-15 18:42:10.834432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.524 [2024-07-15 18:42:10.834604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.524 [2024-07-15 18:42:10.834612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.524 [2024-07-15 18:42:10.834618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.524 [2024-07-15 18:42:10.837361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.846901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.847301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.847316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.847323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.847501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.847673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.847680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.847686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.850425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.859962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.860358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.860374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.860381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.860552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.860728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.860735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.860741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.863483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.873037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.873463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.873478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.873487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.873660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.873831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.873839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.873845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.876589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.886137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.886545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.886560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.886566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.886738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.886909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.886917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.886926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.889668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.899211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.899612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.899627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.899634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.899805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.899977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.899985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.899991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.902734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.912275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.912560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.912575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.912582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.912753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.912924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.912931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.912937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.915679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.925225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.925631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.925646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.925653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.925823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.925995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.926003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.926009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.928749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.938293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:25.525 [2024-07-15 18:42:10.938698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.525 [2024-07-15 18:42:10.938716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420
00:27:25.525 [2024-07-15 18:42:10.938723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set
00:27:25.525 [2024-07-15 18:42:10.938894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor
00:27:25.525 [2024-07-15 18:42:10.939067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:25.525 [2024-07-15 18:42:10.939074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:25.525 [2024-07-15 18:42:10.939080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:25.525 [2024-07-15 18:42:10.941823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:25.525 [2024-07-15 18:42:10.951371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.525 [2024-07-15 18:42:10.951744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.525 [2024-07-15 18:42:10.951759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.525 [2024-07-15 18:42:10.951765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.525 [2024-07-15 18:42:10.951937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.525 [2024-07-15 18:42:10.952107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.525 [2024-07-15 18:42:10.952115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.525 [2024-07-15 18:42:10.952121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.525 [2024-07-15 18:42:10.954864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.525 [2024-07-15 18:42:10.964409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.525 [2024-07-15 18:42:10.964816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.525 [2024-07-15 18:42:10.964831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.525 [2024-07-15 18:42:10.964838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.525 [2024-07-15 18:42:10.965009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.525 [2024-07-15 18:42:10.965181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.525 [2024-07-15 18:42:10.965189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.525 [2024-07-15 18:42:10.965194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.525 [2024-07-15 18:42:10.967936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:25.525 [2024-07-15 18:42:10.977480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.525 [2024-07-15 18:42:10.977881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.525 [2024-07-15 18:42:10.977896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.525 [2024-07-15 18:42:10.977902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.525 [2024-07-15 18:42:10.978073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.525 [2024-07-15 18:42:10.978248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.526 [2024-07-15 18:42:10.978256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.526 [2024-07-15 18:42:10.978262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.526 [2024-07-15 18:42:10.981001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.526 [2024-07-15 18:42:10.990543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.526 [2024-07-15 18:42:10.990919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.526 [2024-07-15 18:42:10.990935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b980 with addr=10.0.0.2, port=4420 00:27:25.526 [2024-07-15 18:42:10.990942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b980 is same with the state(5) to be set 00:27:25.526 [2024-07-15 18:42:10.991113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b980 (9): Bad file descriptor 00:27:25.526 [2024-07-15 18:42:10.991284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.526 [2024-07-15 18:42:10.991292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.526 [2024-07-15 18:42:10.991297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.526 [2024-07-15 18:42:10.994040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:25.526 [The identical resetting-controller sequence (connect() to 10.0.0.2:4420 refused with errno = 111, flush of tqpair=0x217b980 fails with "Bad file descriptor", controller reinitialization fails, "Resetting controller failed.") repeats roughly every 13 ms, with further attempts at 18:42:11.003, .016, .029, .042, .055, .068, .081, .094, .108, .121, .134, .147, .160, .173, .186, .199, .212 and .225.]
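Every attempt in this loop fails with errno = 111, which on Linux is ECONNREFUSED: the bdevperf host is already retrying nqn.2016-06.io.spdk:cnode1 while nothing is listening on 10.0.0.2:4420 yet (the listener is only added further down in the trace). A quick way to decode the errno from a shell, independent of the harness:

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused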
00:27:25.787 [Reconnect attempts continue in the background at 18:42:11.238, .251, .264, .277, .290, .303, .316 and .329, each failing with errno = 111 against tqpair=0x217b980 (10.0.0.2:4420) and ending in "Resetting controller failed."; interleaved with them, the test script finishes bringing up the target:]
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.787 [2024-07-15 18:42:11.281550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.787 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.788 Malloc0
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
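Collected from the rpc_cmd traces above (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py), the whole target bring-up is a short RPC sequence. A minimal standalone sketch, assuming an nvmf_tgt application is already running and reachable over the default RPC socket; the listener step appears in the trace just below:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the host's next reset attempt can finally connect, which is exactly what the log shows next.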
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:25.788 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:25.788 [2024-07-15 18:42:11.339253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:26.047 [2024-07-15 18:42:11.342520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:26.047 18:42:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:26.047 18:42:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4069585
00:27:26.047 [2024-07-15 18:42:11.372018] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:36.016
00:27:36.016                                                                              Latency(us)
00:27:36.016 Device Information                                                       : runtime(s)     IOPS    MiB/s    Fail/s     TO/s   Average      min      max
00:27:36.016 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:36.016 Verification LBA range: start 0x0 length 0x4000
00:27:36.016 Nvme1n1                                                                  :      15.01  8482.72    33.14  12772.68     0.00   6002.32   651.46 16976.94
00:27:36.016 ===================================================================================================================
00:27:36.016 Total                                                                    :             8482.72    33.14  12772.68     0.00   6002.32   651.46 16976.94
00:27:36.016 18:42:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
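The bdevperf table above is self-consistent for a 4096-byte verify workload: the IOPS column times the IO size should reproduce the MiB/s column, and the large Fail/s figure is consistent with IOs failing fast during the stretch of the run when the controller was down. A quick check of the arithmetic:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8482.72 * 4096 / 1048576 }'   # 33.14, matching the table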
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4071009 ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 4071009 ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4071009'
killing process with pid 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 4071009
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:36.017 18:42:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:36.980 18:42:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:36.980
00:27:36.980 real 0m26.605s
00:27:36.980 user 1m3.587s
00:27:36.980 sys 0m6.477s
00:27:36.980 18:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:36.980 18:42:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:36.980 ************************************
00:27:36.980 END TEST nvmf_bdevperf
00:27:36.980 ************************************
00:27:37.263 18:42:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:37.263 18:42:22 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:37.263 18:42:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:37.263 18:42:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:37.263 18:42:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:37.263 ************************************
00:27:37.263 START TEST nvmf_target_disconnect
00:27:37.263 ************************************
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:37.263 * Looking for test storage...
00:27:37.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
[paths/export.sh@3 and @4 rebuild the same PATH with /opt/go/1.21.1/bin and /opt/protoc/21.7/bin rotated to the front; the repeated ~700-character values match the set printed by export.sh@2 above]
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo [the same PATH value]
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- #
prepare_net_devs 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.263 18:42:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.264 18:42:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.264 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.264 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.264 18:42:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.264 18:42:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:43.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:43.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.827 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.828 18:42:28 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:27:43.828 Found net devices under 0000:86:00.0: cvl_0_0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:43.828 Found net devices under 0000:86:00.1: cvl_0_1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:43.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:43.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms
00:27:43.828 
00:27:43.828 --- 10.0.0.2 ping statistics ---
00:27:43.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:43.828 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:43.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:43.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:27:43.828 
00:27:43.828 --- 10.0.0.1 ping statistics ---
00:27:43.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:43.828 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:43.828 ************************************
00:27:43.828 START TEST nvmf_target_disconnect_tc1
00:27:43.828 ************************************
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
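
The nvmf_tcp_init trace above reduces to a short standalone setup: the target port goes into its own network namespace, the initiator port stays in the default one, and the two share a /24. A minimal sketch, assuming this run's per-host E810 interface names cvl_0_0/cvl_0_1:

# Target NIC in a private netns, initiator NIC in the default netns.
ip netns add cvl_0_0_ns_spdk                                         # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
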
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:43.828 EAL: No free 2048 kB hugepages reported on node 1
00:27:43.828 [2024-07-15 18:42:28.573040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.828 [2024-07-15 18:42:28.573138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x510e60 with addr=10.0.0.2, port=4420
00:27:43.828 [2024-07-15 18:42:28.573185] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:43.828 [2024-07-15 18:42:28.573210] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:43.828 [2024-07-15 18:42:28.573229] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:27:43.828 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:27:43.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:27:43.828 Initializing NVMe Controllers
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:43.828 
00:27:43.828 real 0m0.113s
00:27:43.828 user 0m0.038s
00:27:43.828 sys 0m0.075s
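
tc1 passing on an error is the point of the NOT wrapper: the reconnect initiator is aimed at 10.0.0.2:4420 before any target listens, connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() fails, and NOT turns the non-zero exit status (es=1 above) into success. A minimal sketch of that inversion, simplified from the real helper in autotest_common.sh, which additionally validates the executable (valid_exec_arg) and treats signal deaths (es > 128) specially:

# Simplified NOT: succeed exactly when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    (( es != 0 ))    # non-zero exit from "$@" becomes success for NOT
}
# e.g. NOT build/examples/reconnect ... passes here because nothing listens yet
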
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:43.828 ************************************
00:27:43.828 END TEST nvmf_target_disconnect_tc1
00:27:43.828 ************************************
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:43.828 ************************************
00:27:43.828 START TEST nvmf_target_disconnect_tc2
00:27:43.828 ************************************
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4076059
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4076059
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4076059 ']'
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
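
Behind nvmfappstart, the target application is launched inside the target namespace and the harness blocks until its RPC socket answers. A rough equivalent of that start-and-wait, with waitforlisten sketched here as an rpc.py poll (the real helper retries up to max_retries=100 and also checks that the pid stays alive):

# Start nvmf_tgt on cores 4-7 (-m 0xF0) inside the target namespace; the
# reactors on cores 4-7 in the startup notices below come from this mask.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# Sketched wait: poll until the app serves RPCs on /var/tmp/spdk.sock.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done
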
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:43.828 18:42:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:43.828 [2024-07-15 18:42:28.705073] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:27:43.828 [2024-07-15 18:42:28.705113] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:43.828 EAL: No free 2048 kB hugepages reported on node 1
00:27:43.828 [2024-07-15 18:42:28.775763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:43.828 [2024-07-15 18:42:28.853907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:43.828 [2024-07-15 18:42:28.853942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:43.828 [2024-07-15 18:42:28.853949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:43.828 [2024-07-15 18:42:28.853954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:43.828 [2024-07-15 18:42:28.853959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:43.828 [2024-07-15 18:42:28.854075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:43.828 [2024-07-15 18:42:28.854203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:43.828 [2024-07-15 18:42:28.854284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:43.828 [2024-07-15 18:42:28.854285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 Malloc0
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 [2024-07-15 18:42:29.577660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 [2024-07-15 18:42:29.602533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4076307
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:44.130 18:42:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:44.130 EAL: No free 2048 kB hugepages reported on node 1
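
rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the whole tc2 target configuration above collapses to six RPCs, with arguments exactly as run here:

# Target-side configuration for tc2, as issued above via rpc_cmd.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512 B blocks
./scripts/rpc.py nvmf_create_transport -t tcp -o         # "-o" carried over from NVMF_TRANSPORT_OPTS
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up and the reconnect initiator started in the background, tc2 then kills the target mid-workload, which is where the failure storm below begins.
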
00:27:46.686 18:42:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4076059
00:27:46.686 18:42:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:46.686 Read completed with error (sct=0, sc=8)
00:27:46.686 starting I/O failed
[... every outstanding Read/Write on the qpair completes the same way, "completed with error (sct=0, sc=8)" then "starting I/O failed" ...]
00:27:46.686 [2024-07-15 18:42:31.629659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... an identical burst of aborted Read/Write completions follows ...]
00:27:46.686 [2024-07-15 18:42:31.629863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... and another ...]
00:27:46.687 [2024-07-15 18:42:31.630061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:46.687 [2024-07-15 18:42:31.630238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.687 [2024-07-15 18:42:31.630254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.687 qpair failed and we were unable to recover it.
[... the same three-line stanza repeats with advancing timestamps, still against tqpair=0x7fb800000b90 ...]
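
Two failure signatures alternate from here on. First, I/Os outstanding on the severed connection complete with (sct=0, sc=8); assuming the NVMe base spec's generic-status table, that decodes to status code type 0, status code 0x08, "Command Aborted due to SQ Deletion", which is presumably what the host-side driver uses to fail commands queued on a qpair it is tearing down. Second, each reconnect attempt dies in connect() with errno 111 (ECONNREFUSED), since nothing listens on 10.0.0.2:4420 after the kill -9. A hypothetical decoder for the (sct, sc) pair, mapping assumed from the spec rather than taken from this log:

# Hypothetical helper: decode the (sct, sc) pairs printed above.
decode_nvme_status() {  # usage: decode_nvme_status <sct> <sc>
    local sct=$1 sc=$2
    if (( sct == 0 )); then               # SCT 0: generic command status
        case $sc in
            0) echo "Successful Completion" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) printf 'generic status 0x%02x\n' "$sc" ;;
        esac
    else
        echo "SCT $sct / SC $sc: see the NVMe base spec status tables"
    fi
}
decode_nvme_status 0 8   # prints: Command Aborted due to SQ Deletion
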
00:27:46.687 [2024-07-15 18:42:31.630926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.630936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.631963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.631972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.632127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.632137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 
00:27:46.687 [2024-07-15 18:42:31.632221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.632231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.632367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.632380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.632546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.632556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.632792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.632822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.632991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.633020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.633211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.633240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.633483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.633494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.633641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.633650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.633794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.633804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 00:27:46.687 [2024-07-15 18:42:31.634029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.687 [2024-07-15 18:42:31.634059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.687 qpair failed and we were unable to recover it. 
00:27:46.687 [2024-07-15 18:42:31.634164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.634193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.634384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.634416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.634599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.634609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.634751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.634761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.634952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.634982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.635229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.635389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.635558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.635646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.635777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 
00:27:46.688 [2024-07-15 18:42:31.635926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.635935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.636980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.636990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 
00:27:46.688 [2024-07-15 18:42:31.637198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.637939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.637948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 
00:27:46.688 [2024-07-15 18:42:31.638468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.638975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.638987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.639082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.639094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.639244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.639256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.639388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.639405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 00:27:46.688 [2024-07-15 18:42:31.639561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.688 [2024-07-15 18:42:31.639575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.688 qpair failed and we were unable to recover it. 
00:27:46.688 [2024-07-15 18:42:31.639671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.639684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.639762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.639774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.639987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.640934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.640945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 00:27:46.689 [2024-07-15 18:42:31.641096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.641109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it. 
00:27:46.689 [2024-07-15 18:42:31.641263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.689 [2024-07-15 18:42:31.641276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.689 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats 28 more times, 18:42:31.641347 through 18:42:31.645258; duplicate entries collapsed ...]
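On the errno value above: on Linux, errno 111 is ECONNREFUSED, the error connect() returns when the peer host is reachable but nothing is listening on the target port (here 10.0.0.2:4420, the standard NVMe/TCP port, so the target subsystem is down or not yet listening). A minimal standalone sketch of the failing call, independent of SPDK's posix_sock_create() (address and port are taken from the log; everything else is illustrative):

    /* Attempt one TCP connection the way the log's posix_sock_create() does,
     * and print the resulting errno; with no listener on the port this
     * prints "connect: Connection refused (errno = 111)" on Linux. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            fprintf(stderr, "connect: %s (errno = %d)\n", strerror(errno), errno);

        close(fd);
        return 0;
    }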
00:27:46.689 Read completed with error (sct=0, sc=8) 00:27:46.689 starting I/O failed
[... 31 more "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries collapsed; 32 outstanding I/Os in total (24 reads, 8 writes) were failed back ...]
00:27:46.690 [2024-07-15 18:42:31.645525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:46.690 [2024-07-15 18:42:31.645586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.690 [2024-07-15 18:42:31.645600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.690 qpair failed and we were unable to recover it.
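On the status pair reported for each failed I/O: sct is the NVMe Status Code Type and sc the Status Code. sct=0 selects the Generic Command Status type, in which sc=8 is Command Aborted due to SQ Deletion, meaning these reads and writes were still queued when the driver tore down the failed queue pair; they are not media errors. A rough sketch of how the two fields are packed in the 16-bit completion status word (bit layout per the NVMe completion queue entry; the function name is illustrative):

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe CQE status word (upper half of dword 3):
     * bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT,
     * bits 13:12 = CRD, bit 14 = more, bit 15 = do-not-retry. */
    static void decode_status(uint16_t raw, uint8_t *sct, uint8_t *sc)
    {
        *sc  = (raw >> 1) & 0xff;
        *sct = (raw >> 9) & 0x7;
    }

    int main(void)
    {
        uint16_t raw = (0x0u << 9) | (0x08u << 1);  /* sct=0, sc=0x08 as in the log */
        uint8_t sct, sc;
        decode_status(raw, &sct, &sc);
        printf("sct=%u, sc=%u\n", (unsigned)sct, (unsigned)sc); /* prints: sct=0, sc=8 */
        return 0;
    }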
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats 139 more times, 18:42:31.645758 through 18:42:31.671744; duplicate entries collapsed ...]
00:27:46.693 [2024-07-15 18:42:31.672056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.693 [2024-07-15 18:42:31.672125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.693 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats 30 more times, 18:42:31.672364 through 18:42:31.678780; duplicate entries collapsed ...]
00:27:46.694 [2024-07-15 18:42:31.678971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.679238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.679391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.679574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.679738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.679942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.679972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.680233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.680263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.680528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.680558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.680734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.680764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.680949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.680979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 
00:27:46.694 [2024-07-15 18:42:31.681112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.681141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.681254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.681284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.681421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.681451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.681685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.681714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.681979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.694 [2024-07-15 18:42:31.682008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.694 qpair failed and we were unable to recover it. 00:27:46.694 [2024-07-15 18:42:31.682219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.682248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.682505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.682537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.682706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.682735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.682919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.682948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.683088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.683118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 
00:27:46.695 [2024-07-15 18:42:31.683298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.683328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.683582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.683613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.683797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.683826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.684031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.684072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.684188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.684216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.695 qpair failed and we were unable to recover it. 00:27:46.695 [2024-07-15 18:42:31.684436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.695 [2024-07-15 18:42:31.684468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.684572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.684602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.684839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.684869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.684984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.685013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.685221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.685251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 
00:27:46.696 [2024-07-15 18:42:31.685438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.685469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.685603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.685633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.685821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.685850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.686086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.686115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.686318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.686355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.686529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.686742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.686773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.686943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.686973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.687140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.687170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.687301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.687330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 
00:27:46.696 [2024-07-15 18:42:31.687540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.687571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.687782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.687811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.687994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.688025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.688291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.688322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.688504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.688534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.688732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.688762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.689046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.689076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.689267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.689298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.689453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.689484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.689676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.689705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 
00:27:46.696 [2024-07-15 18:42:31.689887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.689916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.690025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.690056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.690240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.690270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.690391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.690422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.690598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.690628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.690805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.690835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.691023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.691054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.691324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.691362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.691544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.691574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.691803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.691833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 
00:27:46.696 [2024-07-15 18:42:31.692100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.692130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.692301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.692330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.692474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.692503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.692641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.692672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.692857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.692892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.693126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.693156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.693270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.696 [2024-07-15 18:42:31.693299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.696 qpair failed and we were unable to recover it. 00:27:46.696 [2024-07-15 18:42:31.693500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.693530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.693714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.693744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.693947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.693977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 
00:27:46.697 [2024-07-15 18:42:31.694152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.694182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.694361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.694392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.694491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.694520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.694696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.694725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.694920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.694950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.695118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.695147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.695356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.695386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.695561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.695590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.695722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.695752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.695863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.695891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 
00:27:46.697 [2024-07-15 18:42:31.696024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.696053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.696176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.696206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.696388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.696417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.696589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.696618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.696732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.696762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.696976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.697006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.697288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.697318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.697535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.697565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.697753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.697783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.697985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.698015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 
00:27:46.697 [2024-07-15 18:42:31.698196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.698226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.698353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.698389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.698628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.698658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.698789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.698819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.698997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.699216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.699462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.699601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.699741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.699951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.699982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 
00:27:46.697 [2024-07-15 18:42:31.700159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.700188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.700318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.700354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.700542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.700571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.700760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.700788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.700999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.701029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.701148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.701179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.701423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.701454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.701631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.701661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.697 qpair failed and we were unable to recover it. 00:27:46.697 [2024-07-15 18:42:31.701840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.697 [2024-07-15 18:42:31.701868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.702038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.702067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 
00:27:46.698 [2024-07-15 18:42:31.702255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.702285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.702403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.702435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.702613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.702642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.702813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.702842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.703964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.703994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 
00:27:46.698 [2024-07-15 18:42:31.704175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.704205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.704438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.704469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.704651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.704679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.704790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.704819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.705075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.705105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.705282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.705312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.705453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.705482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.705660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.705689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.705806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.705835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 00:27:46.698 [2024-07-15 18:42:31.706089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.706118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it. 
00:27:46.698 [2024-07-15 18:42:31.706291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.698 [2024-07-15 18:42:31.706321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [... the same connect() failure / qpair error pair repeats roughly 200 more times between 18:42:31.706 and 18:42:31.750, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420; tqpair=0x501ed0 through 18:42:31.714, then tqpair=0x7fb800000b90, then tqpair=0x7fb810000b90 from 18:42:31.748 onward ...]
00:27:46.703 [2024-07-15 18:42:31.750021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.750050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it.
00:27:46.703 [2024-07-15 18:42:31.750183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.750212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.750400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.750431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.750611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.750640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.750826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.750856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.751032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.751062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.751350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.751380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.751510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.751539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.751716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.751746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.751874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.751904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 00:27:46.703 [2024-07-15 18:42:31.752021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.703 [2024-07-15 18:42:31.752051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.703 qpair failed and we were unable to recover it. 
00:27:46.703 [2024-07-15 18:42:31.752303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.752331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.752576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.752606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.752869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.752898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.753080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.753109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.753287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.753317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.753449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.753479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.753651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.753680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.753886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.753916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.754027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.754057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.754243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.754272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 
00:27:46.704 [2024-07-15 18:42:31.754538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.754569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.754696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.754725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.754907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.754935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.755126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.755156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.755384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.755414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.755612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.755642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.755833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.755862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.756033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.756061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.756271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.756300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.756487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.756517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 
00:27:46.704 [2024-07-15 18:42:31.756619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.756649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.756870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.756901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.757156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.757185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.757366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.757409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.757618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.757647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.757814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.757843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.757965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.757993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.758171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.758202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.758329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.758374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.758548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.758576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 
00:27:46.704 [2024-07-15 18:42:31.758762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.758792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.758917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.758947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.759065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.759094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.759200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.759230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.759537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.759568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.704 [2024-07-15 18:42:31.759740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.704 [2024-07-15 18:42:31.759770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.704 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.759949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.759979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.760235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.760266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.760384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.760414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.760601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.760630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 
00:27:46.705 [2024-07-15 18:42:31.760745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.760774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.760965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.760994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.761173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.761201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.761469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.761499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.761620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.761649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.761756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.761786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.761967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.761995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.762200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.762228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.762370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.762399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.762585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.762615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 
00:27:46.705 [2024-07-15 18:42:31.762793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.762823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.763013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.763042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.763327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.763368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.763611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.763641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.763902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.763932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.764190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.764219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.764453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.764483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.764688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.764717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.764905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.764936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.765145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.765175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 
00:27:46.705 [2024-07-15 18:42:31.765288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.765317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.765495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.765524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.765714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.765743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.765927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.765963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.766069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.766098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.766220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.766249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.766452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.766482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.766677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.766707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.766831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.766860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.767031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.767060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 
00:27:46.705 [2024-07-15 18:42:31.767321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.767357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.767544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.767573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.767854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.767883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.768086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.768116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.768293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.768322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.768566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.768596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.705 [2024-07-15 18:42:31.768830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.705 [2024-07-15 18:42:31.768859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.705 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.769075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.769105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.769236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.769266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.769445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.769476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 
00:27:46.706 [2024-07-15 18:42:31.769657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.769687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.769858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.769888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.770119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.770148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.770384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.770415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.770675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.770705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.770882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.770912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.771104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.771133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.771251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.771280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.771489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.771519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.771805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.771835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 
00:27:46.706 [2024-07-15 18:42:31.771954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.771984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.772153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.772182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.772285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.772314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.772584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.772613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.772797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.772826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.772958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.772987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.773180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.773210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.773446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.773478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.773659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.773688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.773802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.773831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 
00:27:46.706 [2024-07-15 18:42:31.774067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.774097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.774302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.774331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.774553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.774583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.774709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.774744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.774948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.774978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.775149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.775178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.775358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.775388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.775571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.775601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.775796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.775825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.775990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.776020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 
00:27:46.706 [2024-07-15 18:42:31.776294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.776324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.776514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.776543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.776722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.776751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.776927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.776957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.777079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.777109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.777274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.777303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.777481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.777511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.706 [2024-07-15 18:42:31.777632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.706 [2024-07-15 18:42:31.777663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.706 qpair failed and we were unable to recover it. 00:27:46.707 [2024-07-15 18:42:31.777842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.707 [2024-07-15 18:42:31.777871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.707 qpair failed and we were unable to recover it. 00:27:46.707 [2024-07-15 18:42:31.778054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.707 [2024-07-15 18:42:31.778084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:46.707 qpair failed and we were unable to recover it. 
00:27:46.707 [2024-07-15 18:42:31.778285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.707 [2024-07-15 18:42:31.778315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:46.707 qpair failed and we were unable to recover it.
00:27:46.707 [... the same three-message failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with new timestamps from 18:42:31.778523 through 18:42:31.797624 ...]
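On Linux, errno 111 is ECONNREFUSED: each connect() attempt reached the target address, but nothing was accepting TCP connections on 10.0.0.2 port 4420 (the standard NVMe/TCP port), so the kernel refused the connection and the initiator could not bring the qpair up. The same errno can be reproduced outside SPDK with a plain POSIX socket; the following is a minimal sketch (not part of this test run, only the address and port are taken from the log), assuming a reachable host with no listener on that port:

/* Minimal sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED).
 * Address and port copied from the log; everything else is illustrative. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener at a reachable target, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}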
00:27:46.709 [2024-07-15 18:42:31.797807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.709 [2024-07-15 18:42:31.797877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:46.709 qpair failed and we were unable to recover it.
00:27:46.709 [... the same failure triplet repeats for tqpair=0x501ed0 with addr=10.0.0.2, port=4420 from 18:42:31.798019 through 18:42:31.812237 ...]
00:27:46.711 [2024-07-15 18:42:31.812586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.711 [2024-07-15 18:42:31.812655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:46.711 qpair failed and we were unable to recover it.
00:27:46.711 [... the same failure triplet repeats for tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 from 18:42:31.813068 through 18:42:31.823840 ...]
00:27:46.712 [2024-07-15 18:42:31.824022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.824052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.824219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.824260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.824375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.824405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.824591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.824620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.824888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.824918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.825095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.825124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.825334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.825371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.825615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.825645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.825776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.825806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.826012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.826041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 
00:27:46.712 [2024-07-15 18:42:31.826166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.826196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.826379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.826409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.826595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.826625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.826808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.826837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.827075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.827104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.827236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.827266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.827439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.827469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.827636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.827665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.827854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.827883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.828064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.828093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 
00:27:46.712 [2024-07-15 18:42:31.828289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.828318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.828510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.828541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.828688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.828717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.828915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.712 [2024-07-15 18:42:31.828944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.712 qpair failed and we were unable to recover it. 00:27:46.712 [2024-07-15 18:42:31.829111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.829139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.829255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.829284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.829545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.829576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.829762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.829791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.829979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.830009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.830263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.830292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 
00:27:46.713 [2024-07-15 18:42:31.830428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.830458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.830626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.830655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.830771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.830800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.831040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.831069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.831192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.831221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.831406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.831436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.831671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.831701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.831830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.831859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.832123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.832152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.832261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.832290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 
00:27:46.713 [2024-07-15 18:42:31.832428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.832458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.832650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.832680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.832853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.832882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.833103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.833313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.833586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.833726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.833882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.833989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.834200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 
00:27:46.713 [2024-07-15 18:42:31.834410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.834548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.834702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.834947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.834976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.835088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.835117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.835381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.835411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.835581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.835610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.835743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.835771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.835898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.835927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.836192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.836221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 
00:27:46.713 [2024-07-15 18:42:31.836388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.836418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.836678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.836707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.836873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.836902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.837077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.837106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.713 [2024-07-15 18:42:31.837290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.713 [2024-07-15 18:42:31.837319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.713 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.837620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.837650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.837893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.837922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.838055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.838084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.838198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.838233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.838409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.838439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 
00:27:46.714 [2024-07-15 18:42:31.838618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.838649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.838910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.838939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.839071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.839100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.839318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.839357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.839543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.839572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.839756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.839786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.839970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.839999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.840260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.840289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.840466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.840496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.840696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.840725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 
00:27:46.714 [2024-07-15 18:42:31.841007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.841036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.841219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.841249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.841380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.841410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.841651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.841680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.841904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.841934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.842059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.842088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.842261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.842290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.842569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.842598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.842796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.842826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.843002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.843031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 
00:27:46.714 [2024-07-15 18:42:31.843290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.843319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.843438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.843468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.843648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.843678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.843862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.843892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.844010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.844040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.844252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.844282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.844404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.844434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.844616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.844645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.844815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.844845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.845012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.845041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 
00:27:46.714 [2024-07-15 18:42:31.845160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.845189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.845430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.845460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.845648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.845677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.845788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.845818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.845999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.846027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.714 [2024-07-15 18:42:31.846152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.714 [2024-07-15 18:42:31.846181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.714 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.846419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.846449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.846689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.846719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.846890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.846924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.847028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.847057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 
00:27:46.715 [2024-07-15 18:42:31.847180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.847210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.847410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.847440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.847581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.847611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.847783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.847813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.847984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.848134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.848381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.848592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.848812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.848944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.848973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 
00:27:46.715 [2024-07-15 18:42:31.849082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.849111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.849357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.849388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.849524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.849553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.849728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.849757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.849946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.849975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.850078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.850216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.850430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.850565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.850790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 
00:27:46.715 [2024-07-15 18:42:31.850941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.850971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.851072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.851101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.851268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.851298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.851508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.851538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.851657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.851686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.851819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.851849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.852031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.852060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.852192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.852221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.852327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.852364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 00:27:46.715 [2024-07-15 18:42:31.852567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.715 [2024-07-15 18:42:31.852596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.715 qpair failed and we were unable to recover it. 
00:27:46.720 [2024-07-15 18:42:31.891396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.891425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.891533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.891562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.891735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.891771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.893229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.893280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.893424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.893455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.893573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.720 [2024-07-15 18:42:31.893602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.720 qpair failed and we were unable to recover it. 00:27:46.720 [2024-07-15 18:42:31.893733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.893761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.894029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.894059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.894257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.894293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.894441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.894473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 
00:27:46.721 [2024-07-15 18:42:31.894667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.894696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.894872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.894900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.895812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.895840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.896016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.896045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.896224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.896253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 
00:27:46.721 [2024-07-15 18:42:31.896376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.896407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.896576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.896605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.896879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.896908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.897104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.897134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.897370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.897401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.897531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.897560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.897668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.897697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.897884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.897913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.898017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.898225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 
00:27:46.721 [2024-07-15 18:42:31.898354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.898497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.898713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.898864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.898894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.899927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.899956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 
00:27:46.721 [2024-07-15 18:42:31.900189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.900219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.900488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.721 [2024-07-15 18:42:31.900518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.721 qpair failed and we were unable to recover it. 00:27:46.721 [2024-07-15 18:42:31.900754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.900783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.900899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.900928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.901048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.901077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.901251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.901279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.901417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.901448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.901579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.901608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.901787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.901821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.902008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 
00:27:46.722 [2024-07-15 18:42:31.902157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.902373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.902583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.902715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.902942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.903145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.903289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.903456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.903597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.903747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 
00:27:46.722 [2024-07-15 18:42:31.903893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.903922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.904959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.904988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.905168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.905197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.905379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.905409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.905512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.905540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 
00:27:46.722 [2024-07-15 18:42:31.905712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.905740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.905933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.905961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.906089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.906118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.906304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.906333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.906488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.906519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.906709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.906738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.906933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.906963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.907159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.907187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.907368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.907399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.907615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.907644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 
00:27:46.722 [2024-07-15 18:42:31.907816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.907845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.907976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.908005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.722 [2024-07-15 18:42:31.908113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.722 [2024-07-15 18:42:31.908143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.722 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.908249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.908278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.908447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.908477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.908598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.908627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.908752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.908782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.908896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.908925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.909113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.909147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.909349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.909379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 
00:27:46.723 [2024-07-15 18:42:31.909565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.909594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.909711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.909740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.909851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.909881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.909991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.910020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.910201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.910230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.910411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.910442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.910613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.910642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.910831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.910861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.910976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.911109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 
00:27:46.723 [2024-07-15 18:42:31.911259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.911473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.911689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.911839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.911868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.912047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.912076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.912328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.912367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.912534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.912563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.912678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.912707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.912820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.912848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.913086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.913114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 
00:27:46.723 [2024-07-15 18:42:31.913394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.913424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.913538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.913567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.913773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.913802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.913913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.913941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.914961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.914989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 
00:27:46.723 [2024-07-15 18:42:31.915105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.915249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.915381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.915526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.915742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.915880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.723 [2024-07-15 18:42:31.915909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.723 qpair failed and we were unable to recover it. 00:27:46.723 [2024-07-15 18:42:31.916026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.724 [2024-07-15 18:42:31.916055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.724 qpair failed and we were unable to recover it. 00:27:46.724 [2024-07-15 18:42:31.916165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.724 [2024-07-15 18:42:31.916194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.724 qpair failed and we were unable to recover it. 00:27:46.724 [2024-07-15 18:42:31.916383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.724 [2024-07-15 18:42:31.916418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.724 qpair failed and we were unable to recover it. 00:27:46.724 [2024-07-15 18:42:31.916625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.724 [2024-07-15 18:42:31.916654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.724 qpair failed and we were unable to recover it. 
00:27:46.724 [2024-07-15 18:42:31.916760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.724 [2024-07-15 18:42:31.916788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:46.724 qpair failed and we were unable to recover it.
[log condensed: the identical three-message sequence above — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 18:42:31.916996 and 18:42:31.952621 (console time 00:27:46.724 through 00:27:46.728); only the timestamps differ between repetitions.]
00:27:46.728 [2024-07-15 18:42:31.952805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.952832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.952950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.952977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.953125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.953330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.953466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.953613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.953820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.953985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.954011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.954151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.954180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 00:27:46.728 [2024-07-15 18:42:31.954378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.954408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.728 qpair failed and we were unable to recover it. 
00:27:46.728 [2024-07-15 18:42:31.954588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.728 [2024-07-15 18:42:31.954617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.954788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.954817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.954986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.955949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.955979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.956099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.956128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 
00:27:46.729 [2024-07-15 18:42:31.956299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.956328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.956535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.956566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.956746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.956776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.956886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.956916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.957214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.957243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.957373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.957403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.957513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.957542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.957662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.957691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.957877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.957906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.958168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.958199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 
00:27:46.729 [2024-07-15 18:42:31.958405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.958435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.958544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.958574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.958700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.958729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.958909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.959173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.959202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.959309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.959347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.959531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.959560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.959735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.959763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.960003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.960136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 
00:27:46.729 [2024-07-15 18:42:31.960264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.960458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.960694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.960916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.960950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.961055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.961091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.961228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.961262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.961449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.961484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.961744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.961778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.961968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.961998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.962115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.962144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 
00:27:46.729 [2024-07-15 18:42:31.962277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.962310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.962440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.729 [2024-07-15 18:42:31.962481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.729 qpair failed and we were unable to recover it. 00:27:46.729 [2024-07-15 18:42:31.962605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.962635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.962756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.962788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.964114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.964164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.964448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.964488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.965850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.965899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.966027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.966057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.966175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.966203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.966449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.966480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 
00:27:46.730 [2024-07-15 18:42:31.966660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.966689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.966878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.966906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.967054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.967258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.967401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.967618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.967826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.967999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.968152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.968319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 
00:27:46.730 [2024-07-15 18:42:31.968477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.968679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.968826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.968964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.968993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.969160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.969189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.969293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.969322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.969457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.969486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.969654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.969683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.969855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.969884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.970068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.970097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 
00:27:46.730 [2024-07-15 18:42:31.970195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.970224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.970411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.970442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.970559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.730 [2024-07-15 18:42:31.970589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.730 qpair failed and we were unable to recover it. 00:27:46.730 [2024-07-15 18:42:31.970775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.970803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.970915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.970944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.971066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.971270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.971419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.971570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.971847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 
00:27:46.731 [2024-07-15 18:42:31.971969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.971996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.972169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.972198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.972384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.972415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.972597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.972626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.972823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.972852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.972966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.973123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.973357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.973585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.973714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 
00:27:46.731 [2024-07-15 18:42:31.973924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.973953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.974925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.974954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.975172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.975201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.975309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.975347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.975533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.975562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 
00:27:46.731 [2024-07-15 18:42:31.975684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.975713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.975833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.975861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.975979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.976008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.976214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.976243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.976432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.976462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.976645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.976673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.976841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.976870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.976985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.977113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.977266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 
00:27:46.731 [2024-07-15 18:42:31.977495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.977643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.977860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.977890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.731 [2024-07-15 18:42:31.978104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.731 [2024-07-15 18:42:31.978132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.731 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.978241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.978270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.978404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.978435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.978717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.978746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.978938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.978967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.979068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.979096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 00:27:46.732 [2024-07-15 18:42:31.979301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.732 [2024-07-15 18:42:31.979330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:46.732 qpair failed and we were unable to recover it. 
00:27:46.732 [2024-07-15 18:42:31.979587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.732 [2024-07-15 18:42:31.979617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:46.732 qpair failed and we were unable to recover it.
00:27:46.737 [duplicate log output condensed: the three-line sequence above — connect() failed, errno = 111 (ECONNREFUSED), the nvme_tcp_qpair_connect_sock error, and "qpair failed and we were unable to recover it." — repeats back-to-back from 18:42:31.979 through 18:42:32.019, alternating between tqpair=0x7fb808000b90 and tqpair=0x7fb800000b90, always with addr=10.0.0.2, port=4420; roughly 200 further repetitions omitted]
00:27:46.737 [2024-07-15 18:42:32.019375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.019408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.019540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.019570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.019704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.019736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.019865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.019894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.020837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.020876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 
00:27:46.737 [2024-07-15 18:42:32.021011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.021042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.021155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.021184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.021360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.021398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.021533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.021563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.021740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.737 [2024-07-15 18:42:32.021769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.737 qpair failed and we were unable to recover it. 00:27:46.737 [2024-07-15 18:42:32.021871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.021903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 
00:27:46.738 [2024-07-15 18:42:32.022688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.022869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.022981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.023894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.023991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.024140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 
00:27:46.738 [2024-07-15 18:42:32.024299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.024487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.024625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.024776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.024808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.024980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.025271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.025446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.025596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.025750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.025955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.025987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 
00:27:46.738 [2024-07-15 18:42:32.026185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.026217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.026351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.026385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.026576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.026609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.026739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.026771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.026882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.026912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.027109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.027248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.027412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.027569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.027792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 
00:27:46.738 [2024-07-15 18:42:32.027926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.027957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.028095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.028131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.028304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.028333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.028483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.028514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.738 qpair failed and we were unable to recover it. 00:27:46.738 [2024-07-15 18:42:32.028646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.738 [2024-07-15 18:42:32.028688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.028887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.028923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.029045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.029074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.029184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.029223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.029362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.029394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.029525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.029557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-07-15 18:42:32.029722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.029753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.030812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.030844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.031037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.031069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.031183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.031221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.031413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.031444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-07-15 18:42:32.031633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.031665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.031783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.031813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.032874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.032907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.033019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.033049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.033164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.033204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-07-15 18:42:32.033331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.033376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.033638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.033670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.033791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.033820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.033988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.034203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.034381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.034613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.034772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.034904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.034934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.035063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.035096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 
00:27:46.739 [2024-07-15 18:42:32.035279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.035308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.035467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.035503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.035690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.035722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.035849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.035880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.035988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.036017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.739 qpair failed and we were unable to recover it. 00:27:46.739 [2024-07-15 18:42:32.036202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.739 [2024-07-15 18:42:32.036234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.036353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.036390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.036567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.036599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.036772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.036804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.036995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.037027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-07-15 18:42:32.037204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.037233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.037355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.037397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.037512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.037547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.037805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.037835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.038055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.038088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.038204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.038244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.038446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.038480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.038652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.038685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.038819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.038849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.039033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.039065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-07-15 18:42:32.039178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.039208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.039322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.039365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.039609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.039641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.039902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.039934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.040119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.040150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.040377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.040412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.040626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.040662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.040852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.040882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.041002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.041034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.041234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.041263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-07-15 18:42:32.041438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.041472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.041682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.041721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.041905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.041943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.042158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.042189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.042369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.042400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.042530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.042561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.042843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.042872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.043005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.043035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.043228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.043257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 00:27:46.740 [2024-07-15 18:42:32.043525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.740 [2024-07-15 18:42:32.043556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.740 qpair failed and we were unable to recover it. 
00:27:46.740 [2024-07-15 18:42:32.043686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.043900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.043929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.044100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.044130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.044258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.044288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.044473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.044693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.740 [2024-07-15 18:42:32.044723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.740 qpair failed and we were unable to recover it.
00:27:46.740 [2024-07-15 18:42:32.044907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.044937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.045081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.045286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.045548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.045699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.045839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.045972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.046287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.046453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.046584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.046727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.046869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.046899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.047194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.047224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.047362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.047393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.047523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.047553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.047734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.047764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.047884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.047914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.048862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.048971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.049192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.049380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.049579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.049785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.049948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.049977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.050134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.050164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.050278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.050307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.050506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.050537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.050709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.050739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.050863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.050893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.051127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.051157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.051285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.051315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.051450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.051481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.051748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.051778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.051962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.051992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.052126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.052156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.052274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.741 [2024-07-15 18:42:32.052304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.741 qpair failed and we were unable to recover it.
00:27:46.741 [2024-07-15 18:42:32.052502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.052533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.052708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.052738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.052904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.052933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.053060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.053090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.053217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.053246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.053367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.053405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.053592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.053621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.053812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.053841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.054052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.054196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.054361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.054580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.054807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.054979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.055008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.055201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.055229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.055358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.055389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.055569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.055598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.055787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.055816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.055984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.056203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.056357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.056491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.056710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.056946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.056975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.057105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.057134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.057234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.057263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.057464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.057496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.057782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.057811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.057936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.057965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.058133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.058162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.058261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.058290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.058425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.742 [2024-07-15 18:42:32.058456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.742 qpair failed and we were unable to recover it.
00:27:46.742 [2024-07-15 18:42:32.058643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.058673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.058857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.058887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.059002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.059031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.059286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.059315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.059502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.059532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.059652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.059681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.059869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.059898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.060844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.060873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.061105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.061139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.061263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.061292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.061486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.061519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.061722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.061751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.061859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.061888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.062864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.062893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.063001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.063031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.063144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.063173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.063372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.063403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.063689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.063719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.063900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.063929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.064123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.064152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.064275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.064304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.064442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.064474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.064647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.064677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.064867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.064897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.065070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.065099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.065238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.065267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.065456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.065488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.065685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.065714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.065909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.065939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.066203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.066232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.066366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.743 [2024-07-15 18:42:32.066397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.743 qpair failed and we were unable to recover it.
00:27:46.743 [2024-07-15 18:42:32.066619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.066649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.066827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.066856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.066961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.066991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.067181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.067212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.067392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.067422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.067592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.067622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.067754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.067785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.067887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.067917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.068956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.068986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.069204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.069349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.069486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.069615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.069875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.069997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.070151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.070302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.070540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.070674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.070872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.070901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.071969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.071998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.072134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.072296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.072442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.072642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.072849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.072978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.073007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.073123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.073152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.073266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.073296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.073420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.073450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.073556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.744 [2024-07-15 18:42:32.073586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.744 qpair failed and we were unable to recover it.
00:27:46.744 [2024-07-15 18:42:32.073715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.073745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.073861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.073890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.074060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.074090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.074256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.074285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.074528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.074559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.074726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.074756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.074946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.074975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.075091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.075121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.075288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.075317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.075456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.075487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.075596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.075626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.075810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.075845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.076874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.076904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.077926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.077955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.078861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.078890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.079060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.079089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.079324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.079363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.079463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.079493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.079730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.079760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.079879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.079909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.745 [2024-07-15 18:42:32.080964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.745 [2024-07-15 18:42:32.080994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.745 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.081132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.081276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.081427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.081568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.081719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.081978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.082126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.082156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.082285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.746 [2024-07-15 18:42:32.082314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.746 qpair failed and we were unable to recover it.
00:27:46.746 [2024-07-15 18:42:32.082428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.082459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.082642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.082672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.082839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.082873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.082995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.083202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.083398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.083534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.083678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.083903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.083933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.084044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 
00:27:46.746 [2024-07-15 18:42:32.084190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.084405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.084563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.084705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.084839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.084868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.085054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.085084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.085261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.085291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.085478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.085509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.085630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.085660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.085833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.085863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 
00:27:46.746 [2024-07-15 18:42:32.086041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.086185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.086332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.086550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.086698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.086900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.086931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.087103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.087133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.746 [2024-07-15 18:42:32.087322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.746 [2024-07-15 18:42:32.087362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.746 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.087545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.087576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.087756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.087788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 
00:27:46.747 [2024-07-15 18:42:32.087900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.087929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.088095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.088125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.088240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.088270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.088401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.088432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.088614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.088644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.088838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.088868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.089120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.089150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.089283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.089313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.089429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.089460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.089563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.089593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 
00:27:46.747 [2024-07-15 18:42:32.089830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.089861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.090871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.090987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.091139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.091299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 
00:27:46.747 [2024-07-15 18:42:32.091512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.091642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.091773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.091924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.091953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.092904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.092934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 
00:27:46.747 [2024-07-15 18:42:32.093120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.093269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.093415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.093553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.093761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.093911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.093941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.094054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.094084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.094205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.094234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.747 [2024-07-15 18:42:32.094418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.747 [2024-07-15 18:42:32.094449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.747 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.094554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.094585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 
00:27:46.748 [2024-07-15 18:42:32.094754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.094783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.094958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.094987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.095153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.095182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.095354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.095402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.095613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.095642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.095813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.095842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.095964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.096165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.096195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.096325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.096390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.096566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.096596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 
00:27:46.748 [2024-07-15 18:42:32.096775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.096805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.096979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.097936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.097966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.098064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.098206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 
00:27:46.748 [2024-07-15 18:42:32.098358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.098558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.098753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.098953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.098983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.099266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.099296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.099423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.099454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.099656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.099686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.099859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.099888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.100122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.100152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.100265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.100294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 
00:27:46.748 [2024-07-15 18:42:32.100470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.100500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.100622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.100652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.100833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.100862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.101124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.101153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.101276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.101304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.101465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.101495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.101670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.101700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.101879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.101909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.102096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.102125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.748 qpair failed and we were unable to recover it. 00:27:46.748 [2024-07-15 18:42:32.102300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.748 [2024-07-15 18:42:32.102335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 
00:27:46.749 [2024-07-15 18:42:32.102551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.102581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.102699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.102729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.102963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.102993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.103249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.103279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.103479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.103510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.103714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.103743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.103977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.104189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.104323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.104532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 
00:27:46.749 [2024-07-15 18:42:32.104739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.104906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.104935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.105118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.105152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.105254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.105287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.105410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.105441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.105617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.105646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.105842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.105871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.106041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.106070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.106236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.106266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 00:27:46.749 [2024-07-15 18:42:32.106391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.749 [2024-07-15 18:42:32.106422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.749 qpair failed and we were unable to recover it. 
00:27:46.749 [2024-07-15 18:42:32.106604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.749 [2024-07-15 18:42:32.106632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.749 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 18:42:32.106 and 18:42:32.144 ...]
00:27:46.754 [2024-07-15 18:42:32.143993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.754 [2024-07-15 18:42:32.144025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:46.754 qpair failed and we were unable to recover it.
00:27:46.754 [2024-07-15 18:42:32.144231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.144271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.144543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.144585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.144723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.144755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.144956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.144989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.145187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.145219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.145371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.145405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.754 qpair failed and we were unable to recover it. 00:27:46.754 [2024-07-15 18:42:32.145586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.754 [2024-07-15 18:42:32.145623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.145831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.145862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.146059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.146092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.146204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.146241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-07-15 18:42:32.146381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.146419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.146550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.146578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.146785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.146817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.147007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.147039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.147179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.147208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.147394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.147427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.147551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.147580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.147854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.147885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.148086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.148119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.148235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.148265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-07-15 18:42:32.148431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.148464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.148660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.148690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.148796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.148826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.149014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.149055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.149166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.149195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.149437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.149472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.149712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.149745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.149982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.150014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.150254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.150290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.150512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.150545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-07-15 18:42:32.150669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.150700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.150901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.150934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.151142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.151178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.151395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.151430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.151551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.151581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.151777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.151811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.152061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.152200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.152405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.152561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 
00:27:46.755 [2024-07-15 18:42:32.152795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.755 qpair failed and we were unable to recover it. 00:27:46.755 [2024-07-15 18:42:32.152935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.755 [2024-07-15 18:42:32.152966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.153102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.153133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.153245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.153275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.153470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.153504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.153681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.153713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.153829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.153861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.154063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.154095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.154261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.154290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.154419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.154467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-07-15 18:42:32.154663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.154694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.154795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.154824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.155897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.155929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.156120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.156149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.156319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.156366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-07-15 18:42:32.156482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.156512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.156693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.156728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.156904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.156935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.157224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.157261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.157450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.157485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.157616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.157652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.157781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.157811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.157947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.157981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.158171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.158203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.158350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.158382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-07-15 18:42:32.158585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.158617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.158719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.158748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.158877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.158914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.159036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.159065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.159234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.159264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.159446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.159480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.159671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.159702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.159947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.159980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.160082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.160112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.160296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.160326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 
00:27:46.756 [2024-07-15 18:42:32.160545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.160578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.160768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.756 [2024-07-15 18:42:32.160799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.756 qpair failed and we were unable to recover it. 00:27:46.756 [2024-07-15 18:42:32.160914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.160951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.161069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.161100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.161276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.161306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.161498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.161532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.161659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.161688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.161801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.161830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.162015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.162150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-07-15 18:42:32.162373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.162602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.162799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.162961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.162990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.163159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.163188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.163359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.163396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.163503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.163532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.163719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.163749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.163936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.163967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.164146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.164176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-07-15 18:42:32.164358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.164392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.164567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.164596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.164780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.164812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.164999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.165155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.165314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.165539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.165683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.165838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.165868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.166121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.166155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-07-15 18:42:32.166360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.166392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.166566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.166603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.166747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.166777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.166880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.166909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.167922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.167952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 
00:27:46.757 [2024-07-15 18:42:32.168087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.168119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.168234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.168264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.168389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.757 [2024-07-15 18:42:32.168422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.757 qpair failed and we were unable to recover it. 00:27:46.757 [2024-07-15 18:42:32.168605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.168639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.168748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.168778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.168905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.168942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.169114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.169144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.169323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.169369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.169509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.169539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 00:27:46.758 [2024-07-15 18:42:32.169680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.169719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it. 
00:27:46.758 [2024-07-15 18:42:32.169905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.758 [2024-07-15 18:42:32.169944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.758 qpair failed and we were unable to recover it.
00:27:46.758 [the same pair of errors repeated for every reconnect attempt between 18:42:32.169905 and 18:42:32.210657: each connect() to addr=10.0.0.2, port=4420 failed with errno = 111, and tqpair=0x7fb800000b90 could not be recovered]
00:27:46.763 [2024-07-15 18:42:32.210628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.210657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it.
00:27:46.763 [2024-07-15 18:42:32.210826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.210855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.210968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.210997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.211111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.211141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.211312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.211354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.211567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.211597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.211784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.211813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.211948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.211978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.212149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.212179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.212310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.212351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.212538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.212568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 
00:27:46.763 [2024-07-15 18:42:32.212686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.212716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.212893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.212922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.213031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.213061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.213256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.213295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.213492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.213522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.213780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.213809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.213920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.213950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.214133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.763 [2024-07-15 18:42:32.214162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.763 qpair failed and we were unable to recover it. 00:27:46.763 [2024-07-15 18:42:32.214405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.214436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.214623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.214654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-07-15 18:42:32.214768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.214798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.214906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.214935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.215055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.215084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.215262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.215292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.215491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.215531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.215717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.215747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.215927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.215955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.216124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.216153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.216397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.216429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.216603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.216641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 
00:27:46.764 [2024-07-15 18:42:32.216821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.216850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.216970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.216998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.217181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.217210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.217320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.217364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.217542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.217576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.217842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.217872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:46.764 [2024-07-15 18:42:32.217978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.764 [2024-07-15 18:42:32.218007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:46.764 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.218122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.218151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.218278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.218308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.218508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.218538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 
00:27:47.045 [2024-07-15 18:42:32.218657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.218687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.218789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.218818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.219894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.219924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.220098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.220238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 
00:27:47.045 [2024-07-15 18:42:32.220383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.220539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.220681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.220834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.220868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.221058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.221087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.221257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.221290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.221478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.221527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.221634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.221664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.221857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.221896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.222034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 
00:27:47.045 [2024-07-15 18:42:32.222251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.222405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.222611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.222762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.222961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.045 [2024-07-15 18:42:32.222990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.045 qpair failed and we were unable to recover it. 00:27:47.045 [2024-07-15 18:42:32.223173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.223203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.223375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.223404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.223645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.223675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.223861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.223892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.223993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 
00:27:47.046 [2024-07-15 18:42:32.224152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.224422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.224564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.224797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.224945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.224975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.225084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.225239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.225399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.225549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.225702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 
00:27:47.046 [2024-07-15 18:42:32.225857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.225887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.226060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.226095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.226202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.226237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.226433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.226465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.226587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.226620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.226803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.226833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.227010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.227169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.227314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.227481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 
00:27:47.046 [2024-07-15 18:42:32.227720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.227858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.227888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.228906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.228938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.229045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.229074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 00:27:47.046 [2024-07-15 18:42:32.229299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.046 [2024-07-15 18:42:32.229331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.046 qpair failed and we were unable to recover it. 
00:27:47.046 [2024-07-15 18:42:32.229469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.229507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.229734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.229764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.229881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.229910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.230110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.230140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.230374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.230404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.230572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.230601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.230730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.230759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.230954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.230983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.231103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.231133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.231308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.231346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 
00:27:47.047 [2024-07-15 18:42:32.231488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.231519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.231635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.231665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.231897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.231927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.232905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.232934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.233102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.233131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 
00:27:47.047 [2024-07-15 18:42:32.233248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.233278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.233446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.233476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.233652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.233682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.233853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.233883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.233995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.234196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.234329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.234500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.234729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 00:27:47.047 [2024-07-15 18:42:32.234945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.047 [2024-07-15 18:42:32.234974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.047 qpair failed and we were unable to recover it. 
00:27:47.047 [2024-07-15 18:42:32.235092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.047 [2024-07-15 18:42:32.235122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.047 qpair failed and we were unable to recover it.
00:27:47.047 [... the identical three-line failure triplet above — connect() failed, errno = 111 / sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats for every reconnect attempt from 18:42:32.235 through 18:42:32.273; duplicates collapsed ...]
00:27:47.053 [2024-07-15 18:42:32.273846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.053 [2024-07-15 18:42:32.273873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.053 qpair failed and we were unable to recover it.
00:27:47.053 [2024-07-15 18:42:32.274036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.274226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.274430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.274565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.274776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.274905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.274932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.275096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.275287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.275444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.275571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-07-15 18:42:32.275712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.275903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.275929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.276944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.276971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.277078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.277105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.053 [2024-07-15 18:42:32.277201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.277228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 
00:27:47.053 [2024-07-15 18:42:32.277325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.053 [2024-07-15 18:42:32.277359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.053 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.277469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.277496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.277599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.277626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.277776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.277845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.278914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.278942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 
00:27:47.054 [2024-07-15 18:42:32.279053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.279195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.279354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.279504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.279719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.279855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.279884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.280009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.280163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.280312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.280488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 
00:27:47.054 [2024-07-15 18:42:32.280718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.280864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.280894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.281934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.281963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.282079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.282107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.282290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.282319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 
00:27:47.054 [2024-07-15 18:42:32.282435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.282471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.282676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.054 [2024-07-15 18:42:32.282705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.054 qpair failed and we were unable to recover it. 00:27:47.054 [2024-07-15 18:42:32.282892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.282921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.283094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.283124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.283388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.283418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.283525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.283553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.283669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.283699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.283809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.284014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.284171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-07-15 18:42:32.284316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.284540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.284682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.284895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.284924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.285906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.285934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-07-15 18:42:32.286044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.286257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.286465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.286620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.286756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.286952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.286980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.287100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.287129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.287317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.287363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.287471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.287499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.287602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.287630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-07-15 18:42:32.287799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.287829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.288928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.288957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.289138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.289167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.289300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.289330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.289451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.289481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 
00:27:47.055 [2024-07-15 18:42:32.289718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.289748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.289979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.290046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.290233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.055 [2024-07-15 18:42:32.290267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.055 qpair failed and we were unable to recover it. 00:27:47.055 [2024-07-15 18:42:32.290461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.290494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.290699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.290730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.290964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.290994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.291115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.291145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.291404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.291434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.291549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.291578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.291766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.291795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 
00:27:47.056 [2024-07-15 18:42:32.291968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.291999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.292184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.292213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.292351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.292381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.292498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.292528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.292765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.292803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.292995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.293025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.293202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.293231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.293428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.293459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.293649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.293679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.293795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.293824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 
00:27:47.056 [2024-07-15 18:42:32.294006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.294035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.294268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.294296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.294495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.294525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.294652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.294681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.294871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.294901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.295095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.295124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.295418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.295448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.295618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.295646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.295758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.295788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.296045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.296073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 
00:27:47.056 [2024-07-15 18:42:32.296186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.296216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.296409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.296438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.296609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.296638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.296756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.296785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.297047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.297076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.297262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.297291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.297471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.297501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.297675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.297704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.297826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.297856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.298038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.298068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 
00:27:47.056 [2024-07-15 18:42:32.298351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.056 [2024-07-15 18:42:32.298381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.056 qpair failed and we were unable to recover it. 00:27:47.056 [2024-07-15 18:42:32.298519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.298549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.298718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.298747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.298845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.298875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 00:27:47.057 [2024-07-15 18:42:32.299913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.057 [2024-07-15 18:42:32.299942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.057 qpair failed and we were unable to recover it. 
00:27:47.062 [2024-07-15 18:42:32.341411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.341442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.341699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.341733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.341971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.342001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.342242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.342271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.342464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.342495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.342661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.342690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.342806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.342836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.343003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.343033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.343224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.343253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.343443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.343474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 
00:27:47.062 [2024-07-15 18:42:32.343652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.343682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.343855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.343885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.344067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.344096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.344221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.344251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.344500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.344531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.344776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.344806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.344973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.345002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.345119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.345148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.345330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.345386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.345575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.345605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 
00:27:47.062 [2024-07-15 18:42:32.345720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.345749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.346011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.346041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.062 qpair failed and we were unable to recover it. 00:27:47.062 [2024-07-15 18:42:32.346167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.062 [2024-07-15 18:42:32.346197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.346477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.346506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.346636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.346665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.346834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.346863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.347097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.347127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.347291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.347320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.347568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.347598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.347764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.347793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-07-15 18:42:32.347972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.348183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.348330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.348568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.348717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.348851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.348880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.349054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.349083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.349253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.349282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.349466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.349495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.349660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.349690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 
00:27:47.063 [2024-07-15 18:42:32.349868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.349897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.350073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.350107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.350380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.350411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.350617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.350646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.350833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.350862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.351047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.063 [2024-07-15 18:42:32.351076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.063 qpair failed and we were unable to recover it. 00:27:47.063 [2024-07-15 18:42:32.351333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.351372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.351569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.351598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.351790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.351820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.351990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.352020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-07-15 18:42:32.352191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.352221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.352391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.352422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.352627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.352656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.352844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.352872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.353047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.353075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.353264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.353294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.353594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.353625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.353803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.353832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.353964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.353994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.354246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.354276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-07-15 18:42:32.354554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.354584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.354866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.354896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.355079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.355108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.355357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.355387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.355691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.355721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.355983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.356013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.356306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.356335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.356571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.356600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.356818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.356849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.357045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.357074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-07-15 18:42:32.357355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.357386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.357643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.357672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.357798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.357828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.358006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.358036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.358268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.358298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.358579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.358609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.358907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.358936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.359178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.359207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.359449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.359480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.359613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.359642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 
00:27:47.064 [2024-07-15 18:42:32.359839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.359868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.360041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.360075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.360251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.360280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.360483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.064 [2024-07-15 18:42:32.360514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.064 qpair failed and we were unable to recover it. 00:27:47.064 [2024-07-15 18:42:32.360773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.360802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.361036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.361065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.361250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.361279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.361560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.361590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.361775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.361804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.362099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.362128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-07-15 18:42:32.362311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.362350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.362592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.362622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.362855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.362884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.363141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.363170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.363297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.363327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.363530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.363560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.363740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.363770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.364043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.364072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.364302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.364331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.364545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.364575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-07-15 18:42:32.364834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.364863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.365067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.365096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.365301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.365330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.365589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.365619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.365878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.365907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.366098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.366127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.366305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.366333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.366613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.366644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.366847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.366876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.367129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.367158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-07-15 18:42:32.367346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.367378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.367628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.367657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.367941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.367970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.368172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.368202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.368386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.368417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.368599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.368628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.368902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.368930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.369118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.369147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.369382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.369412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.369646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.369676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 
00:27:47.065 [2024-07-15 18:42:32.369940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.065 [2024-07-15 18:42:32.369969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.065 qpair failed and we were unable to recover it. 00:27:47.065 [2024-07-15 18:42:32.370252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.370285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.370565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.370596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.370861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.370890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.371142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.371171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.371415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.371445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.371637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.371667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.371858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.371888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.372100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.372129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.372365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.372396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-07-15 18:42:32.372659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.372689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.372874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.372903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.373107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.373137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.373380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.373410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.373615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.373644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.373844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.373873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.374137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.374165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.374433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.374464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.374588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.374616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 00:27:47.066 [2024-07-15 18:42:32.374872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.066 [2024-07-15 18:42:32.374901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.066 qpair failed and we were unable to recover it. 
00:27:47.066 [2024-07-15 18:42:32.375157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.066 [2024-07-15 18:42:32.375186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.066 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7fb800000b90 from 18:42:32.375303 through 18:42:32.421713 (elapsed prefixes 00:27:47.066-00:27:47.071); only the timestamps differ ...]
00:27:47.071 [2024-07-15 18:42:32.421991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.071 [2024-07-15 18:42:32.422060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:47.071 qpair failed and we were unable to recover it.
[... the triple then repeats for tqpair=0x7fb808000b90 from 18:42:32.422367 through 18:42:32.432540 (elapsed prefixes 00:27:47.071-00:27:47.072) ...]
00:27:47.072 [2024-07-15 18:42:32.432841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.072 [2024-07-15 18:42:32.432881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.072 qpair failed and we were unable to recover it.
[... one further repeat for tqpair=0x7fb800000b90 at 18:42:32.433120/18:42:32.433151 ...]
00:27:47.072 [2024-07-15 18:42:32.433422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.433457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.433733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.433766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.434017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.434054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.434228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.434257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.434438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.434472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.434672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.434705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.434899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.434928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.435124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.435157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.435369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.435400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.435695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.435728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 
00:27:47.072 [2024-07-15 18:42:32.435998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.436030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.436159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.436194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.436381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.436412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.436700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.436732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.436908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.436945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.437217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.437247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.437462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.437502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.437696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.437726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.437978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.438010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.438253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.438282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 
00:27:47.072 [2024-07-15 18:42:32.438601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.438637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.438845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.438885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.439149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.439178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.439367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.439410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.439669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.439700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.439985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.440017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.440299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.440348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.440544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.440575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.440822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.440855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 00:27:47.072 [2024-07-15 18:42:32.441058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.441087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.072 qpair failed and we were unable to recover it. 
00:27:47.072 [2024-07-15 18:42:32.441278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.072 [2024-07-15 18:42:32.441315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.441573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.441604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.441887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.441919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.442141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.442174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.442372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.442407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.442587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.442617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.442859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.442891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.443093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.443130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.443317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.443364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.443636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.443670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-07-15 18:42:32.443855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.443888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.444115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.444151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.444362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.444397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.444587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.444616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.444854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.444887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.445028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.445059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.445321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.445374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.445660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.445695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.445997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.446036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.446284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.446314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-07-15 18:42:32.446524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.446558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.446823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.446862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.447125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.447164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.447452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.447491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.447633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.447664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.447857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.447893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.448110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.448142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.448387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.448421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.448686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.448719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.448987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.449016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 
00:27:47.073 [2024-07-15 18:42:32.449207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.449240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.449458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.449490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.449682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.449722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.449918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.449951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.450214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.450247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.450534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.450569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.073 qpair failed and we were unable to recover it. 00:27:47.073 [2024-07-15 18:42:32.450769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.073 [2024-07-15 18:42:32.450804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.451056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.451089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.451276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.451309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.451615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.451653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-07-15 18:42:32.451930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.451970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.452189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.452219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.452467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.452502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.452789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.452822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.453131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.453164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.453435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.453469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.453607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.453647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.453828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.453857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.454051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.454087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.454331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.454394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-07-15 18:42:32.454677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.454709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.454911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.454943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.455147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.455180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.455366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.455400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.455649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.455685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.455933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.455964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.456179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.456211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.456473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.456512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.456804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.456838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.457101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.457134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-07-15 18:42:32.457378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.457413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.457587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.457619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.457923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.457956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.458143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.458173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.458372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.458405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.458538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.458571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.458810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.458840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.459107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.459141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.459358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.459401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.459682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.459717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 
00:27:47.074 [2024-07-15 18:42:32.459976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.460011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.460218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.460249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.460466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.460500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.460764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.074 [2024-07-15 18:42:32.460797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.074 qpair failed and we were unable to recover it. 00:27:47.074 [2024-07-15 18:42:32.461082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.461114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.461366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.461398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.461695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.461725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.461910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.461940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.462183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.462213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.462498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.462530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
00:27:47.075 [2024-07-15 18:42:32.462704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.462733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.462997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.463026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.463288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.463319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.463536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.463570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.463858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.463887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.464101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.464131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.464375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.464407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.464647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.464677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.464937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.464974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.465220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.465250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
00:27:47.075 [2024-07-15 18:42:32.465539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.465572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.465841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.465871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.466081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.466111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.466297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.466326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.466607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.466639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.466828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.466858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.467097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.467126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.467365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.467398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.467643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.467673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 00:27:47.075 [2024-07-15 18:42:32.467958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.075 [2024-07-15 18:42:32.467988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.075 qpair failed and we were unable to recover it. 
00:27:47.075 [2024-07-15 18:42:32.468260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.075 [2024-07-15 18:42:32.468290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.075 qpair failed and we were unable to recover it.
00:27:47.075 [... the three-line error block above repeats ~210 times with only the timestamps changing (18:42:32.468260 through 18:42:32.523598); every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and no qpair on tqpair=0x7fb800000b90 can be recovered; only the first and last occurrences are shown ...]
00:27:47.081 [2024-07-15 18:42:32.523568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.081 [2024-07-15 18:42:32.523598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.081 qpair failed and we were unable to recover it.
00:27:47.081 [2024-07-15 18:42:32.523855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.523885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.524135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.524165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.524360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.524392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.524670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.524700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.524980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.525010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.525301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.525332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.525621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.525651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.525861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.525892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.526091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.526122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.526390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.526422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 
00:27:47.081 [2024-07-15 18:42:32.526629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.526660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.526917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.526947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.527203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.527233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.527508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.527540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.527814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.527844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.528141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.528172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.528305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.528334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.528546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.528576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.528696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.528737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.528944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.528973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 
00:27:47.081 [2024-07-15 18:42:32.529236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.529266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.529564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.529595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.529870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.529901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.530105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.530135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.081 qpair failed and we were unable to recover it. 00:27:47.081 [2024-07-15 18:42:32.530396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.081 [2024-07-15 18:42:32.530428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.530731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.530762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.530963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.530993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.531130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.531161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.531363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.531395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.531643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.531673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 
00:27:47.082 [2024-07-15 18:42:32.531873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.531904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.532178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.532208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.532407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.532439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.532632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.532662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.532940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.532970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.533263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.533293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.533573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.533604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.533784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.533814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.534007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.534038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.534288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.534317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 
00:27:47.082 [2024-07-15 18:42:32.534606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.534638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.534914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.534944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.535159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.535188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.535440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.535472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.535724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.535753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.535937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.535968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.536167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.536198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.536448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.536479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.536731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.537015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.537045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 
00:27:47.082 [2024-07-15 18:42:32.537356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.537388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.537651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.537681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.537958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.537989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.538164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.538194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.538387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.538418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.538669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.538699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.082 qpair failed and we were unable to recover it. 00:27:47.082 [2024-07-15 18:42:32.539000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.082 [2024-07-15 18:42:32.539030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.539255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.539287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.539549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.539586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.539802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.539832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-07-15 18:42:32.540057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.540087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.540367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.540688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.540718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.540994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.541023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.541260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.541289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.541570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.541601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.541723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.541753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.542021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.542051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.542300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.542331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.542613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.542644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-07-15 18:42:32.542833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.542863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.543128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.543157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.543359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.543391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.543656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.543686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.543980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.544011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.544286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.544316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.544506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.544537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.544762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.544792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.545060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.545090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.545280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.545311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-07-15 18:42:32.545458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.545490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.545741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.545771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.546076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.546107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.546394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.546425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.546625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.546656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.546914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.546945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.547223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.547254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.547541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.547573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.547850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.547880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.548166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.548196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 
00:27:47.083 [2024-07-15 18:42:32.548419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.548450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.548594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.548624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.548801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.548832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.549081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.083 [2024-07-15 18:42:32.549111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.083 qpair failed and we were unable to recover it. 00:27:47.083 [2024-07-15 18:42:32.549286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.549316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.549627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.549658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.549926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.549956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.550256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.550286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.550478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.550514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.550712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.550743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-07-15 18:42:32.550993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.551022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.551219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.551249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.551430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.551461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.551738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.551768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.551968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.551997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.552261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.552291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.552568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.552599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.552791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.552821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.553020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.553050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.553317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.553355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-07-15 18:42:32.553556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.553586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.553836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.553867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.554148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.554178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.554382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.554413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.554711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.554741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.555018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.555048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.555264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.555295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.555584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.555617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.555897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.555926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.556150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.556179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 
00:27:47.084 [2024-07-15 18:42:32.556455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.556486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.556780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.556811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.557093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.557122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.557335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.557375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.557566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.557596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.557877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.557907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.558169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.558200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.558452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.558484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.558721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.084 [2024-07-15 18:42:32.558752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.084 qpair failed and we were unable to recover it. 00:27:47.084 [2024-07-15 18:42:32.558973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.085 [2024-07-15 18:42:32.559004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.085 qpair failed and we were unable to recover it. 
00:27:47.085 [2024-07-15 18:42:32.559275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.085 [2024-07-15 18:42:32.559304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.085 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats for each of the roughly 210 reconnect attempts logged between 18:42:32.559275 and 18:42:32.616756 (elapsed 00:27:47.085 through 00:27:47.366); the duplicate entries are elided ...]
00:27:47.366 [2024-07-15 18:42:32.616727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.366 [2024-07-15 18:42:32.616756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.366 qpair failed and we were unable to recover it.
00:27:47.366 [2024-07-15 18:42:32.617005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.617035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.617153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.617183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.617390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.617421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.617622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.617652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.617928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.617958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.618173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.618203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.618403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.618434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.618651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.618681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.618901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.618930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.619178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.619208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 
00:27:47.366 [2024-07-15 18:42:32.619423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.619453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.619722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.619751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.620047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.620077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.620295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.620324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.620608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.620638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.620855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.620885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.621183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.621212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.621462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.621493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.621733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.621764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.621944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.621974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 
00:27:47.366 [2024-07-15 18:42:32.622224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.622253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.622555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.622586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.622857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.622887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.623181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.623210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.623399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.623430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.623681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.623716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.624016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.624047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.624243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.624272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.624491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.624522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.624789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.624819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 
00:27:47.366 [2024-07-15 18:42:32.625010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.625040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.625312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.625351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.625585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.625615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.625860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.625889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.626093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.626123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.366 [2024-07-15 18:42:32.626312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.366 [2024-07-15 18:42:32.626351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.366 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.626496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.626526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.626668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.626699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.626951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.626979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.627177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.627207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 
00:27:47.367 [2024-07-15 18:42:32.627426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.627457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.627658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.627687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.627888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.627918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.628165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.628194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.628373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.628404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.628603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.628633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.628823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.628853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.629050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.629079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.629280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.629310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.629597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.629628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 
00:27:47.367 [2024-07-15 18:42:32.629807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.629836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.630053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.630081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.630362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.630404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.630694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.630726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.630994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.631029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.631325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.631375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.631578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.631612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.631888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.631919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.632099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.632132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.632240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.632271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 
00:27:47.367 [2024-07-15 18:42:32.632550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.632591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.632788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.632822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.633099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.633133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.633451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.633485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.633798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.633835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.634040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.634076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.634210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.634251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.634445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.634478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.634729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.634762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.635031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.635070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 
00:27:47.367 [2024-07-15 18:42:32.635291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.367 [2024-07-15 18:42:32.635321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.367 qpair failed and we were unable to recover it. 00:27:47.367 [2024-07-15 18:42:32.635622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.635661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.635925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.635956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.636181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.636214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.636466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.636501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.636787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.636820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.637054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.637084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.637378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.637414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.637664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.637695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.637956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.637990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-07-15 18:42:32.638195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.638225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.638479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.638519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.638774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.638805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.639103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.639136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.639268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.639299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.639598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.639633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.639831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.639862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.640082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.640115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.640368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.640400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.640579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.640613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-07-15 18:42:32.640871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.640900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.641149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.641182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.641406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.641439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.641715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.641752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.641991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.642022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.642321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.642369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.642563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.642593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.642727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.642767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.643066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.643100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.643315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.643362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-07-15 18:42:32.643592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.643623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.643910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.643943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.644145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.644176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.644444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.644480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.644692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.644730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.644934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.644973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.645167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.645201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.645416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.645452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.645730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.645763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.645959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.646003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 
00:27:47.368 [2024-07-15 18:42:32.646203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.368 [2024-07-15 18:42:32.646235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.368 qpair failed and we were unable to recover it. 00:27:47.368 [2024-07-15 18:42:32.646434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.646476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.646674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.646704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.646973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.647006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.647298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.647332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.647566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.647602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.647871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.647902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.648105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.648140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.648324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.648372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.648598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.648631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-07-15 18:42:32.648885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.648916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.649118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.649160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.649440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.649472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.649753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.649786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.649979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.650008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.650269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.650302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.650594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.650628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.650836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.650867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.651149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.651182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 00:27:47.369 [2024-07-15 18:42:32.651456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.369 [2024-07-15 18:42:32.651490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.369 qpair failed and we were unable to recover it. 
00:27:47.369 [2024-07-15 18:42:32.651777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.369 [2024-07-15 18:42:32.651810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.369 qpair failed and we were unable to recover it.
00:27:47.369 [... identical posix_sock_create / nvme_tcp_qpair_connect_sock error triplet repeated for tqpair=0x7fb800000b90 (addr=10.0.0.2, port=4420, errno = 111) from 18:42:32.651 through 18:42:32.710 ...]
00:27:47.374 [2024-07-15 18:42:32.710492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.374 [2024-07-15 18:42:32.710526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.374 qpair failed and we were unable to recover it.
00:27:47.374 [2024-07-15 18:42:32.710759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-07-15 18:42:32.710792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-07-15 18:42:32.711006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.374 [2024-07-15 18:42:32.711035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.374 qpair failed and we were unable to recover it. 00:27:47.374 [2024-07-15 18:42:32.711333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.711387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.711663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.711693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.711898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.711932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.712207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.712255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.712387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.712420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.712718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.712752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.713019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.713052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.713303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.713336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 
00:27:47.375 [2024-07-15 18:42:32.713629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.713662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.713881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.713914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.714146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.714184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.714377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.714409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.714667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.714700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.714916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.714949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.715163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.715193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.715451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.715485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.715762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.715795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.716028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.716059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 
00:27:47.375 [2024-07-15 18:42:32.716365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.716399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.716664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.716698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.716956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.716987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.717215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.717248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.717378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.717410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.717689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.717723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.717873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.717903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.718122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.718155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.718364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.718399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.718663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.718696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 
00:27:47.375 [2024-07-15 18:42:32.718979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.719012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.719297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.719329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.719641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.719678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.719948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.719978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.720158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.720187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.720468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.720501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.720704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.720734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.375 [2024-07-15 18:42:32.721027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.375 [2024-07-15 18:42:32.721056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.375 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.721245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.721275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.721530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.721563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 
00:27:47.376 [2024-07-15 18:42:32.721745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.721775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.722072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.722102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.722294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.722323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.722588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.722619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.722812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.722841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.723115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.723152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.723366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.723398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.723648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.723678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.723792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.723821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.724031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.724061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 
00:27:47.376 [2024-07-15 18:42:32.724384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.724417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.724743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.724773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.725023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.725052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.725233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.725262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.725561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.725593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.725854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.725884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.726088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.726117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.726391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.726423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.726562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.726592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.726814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.726844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 
00:27:47.376 [2024-07-15 18:42:32.727040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.727070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.727273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.727303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.727578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.727610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.727876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.727906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.728180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.728209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.728421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.728453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.728575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.728604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.728782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.728811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.729083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.729112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.729389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.729420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 
00:27:47.376 [2024-07-15 18:42:32.729651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.729681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.729961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.729990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.730276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.730310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.730534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.730565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.730846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.730876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.731056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.731086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.731358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.731389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.731637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.376 [2024-07-15 18:42:32.731667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.376 qpair failed and we were unable to recover it. 00:27:47.376 [2024-07-15 18:42:32.731935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.731965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.732216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.732245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 
00:27:47.377 [2024-07-15 18:42:32.732515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.732549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.732830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.732860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.733146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.733176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.733456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.733488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.733683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.733713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.733976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.734005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.734264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.734294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.734605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.734637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.734836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.734866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.735147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.735176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 
00:27:47.377 [2024-07-15 18:42:32.735380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.735413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.735691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.735721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.735861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.735891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.736150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.736179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.736390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.736422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.736721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.736751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.737042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.737071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.737362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.737394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.737648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.737678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.737979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.738009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 
00:27:47.377 [2024-07-15 18:42:32.738280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.738309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.738572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.738605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.738903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.738932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.739208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.739238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.739533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.739565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.739860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.739890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.740135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.740165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.740434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.740467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.740650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.740679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.740954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.740983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 
00:27:47.377 [2024-07-15 18:42:32.741255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.741284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.741563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.741594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.741888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.741923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.742217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.742247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.742444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.742475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.742671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.742702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.742895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.742926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.743214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.743244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.377 [2024-07-15 18:42:32.743522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.377 [2024-07-15 18:42:32.743554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.377 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.743749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.743779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-07-15 18:42:32.744036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.744066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.744196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.744226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.744452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.744483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.744678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.744708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.744971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.745001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.745197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.745228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.745429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.745461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.745637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.745667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.745883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.745912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 00:27:47.378 [2024-07-15 18:42:32.746161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.746191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it. 
00:27:47.378 [2024-07-15 18:42:32.746397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.378 [2024-07-15 18:42:32.746429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.378 qpair failed and we were unable to recover it.
00:27:47.383 [the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x7fb800000b90, addr=10.0.0.2, port=4420 repeated back-to-back through 2024-07-15 18:42:32.803626; every attempt ended with "qpair failed and we were unable to recover it."]
00:27:47.383 [2024-07-15 18:42:32.803839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.803868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.804073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.804101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.804303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.804333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.804626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.804656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.804903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.804932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.805208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.805238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.805488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.805519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.805700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.805729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.805905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.805935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.806209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.806239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 
00:27:47.383 [2024-07-15 18:42:32.806539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.806570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.383 [2024-07-15 18:42:32.806792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.383 [2024-07-15 18:42:32.806821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.383 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.807086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.807121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.807345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.807377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.807626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.807656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.807834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.807863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.808125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.808154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.808430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.808461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.808588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.808618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.808888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.808917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-07-15 18:42:32.809112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.809141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.809420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.809451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.809644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.809674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.809818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.809847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.810051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.810080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.810216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.810245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.810520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.810551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.810834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.810864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.811153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.811182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.811462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.811491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-07-15 18:42:32.811710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.811741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.812016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.812045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.812237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.812267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.812553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.812584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.812857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.812887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.813177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.813206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.813394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.813425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.813644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.813674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.813809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.813838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.813969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.814000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 
00:27:47.384 [2024-07-15 18:42:32.814274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.814303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.814592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.814623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.814750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.814779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.815030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.815059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.815253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.815283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.815544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.815575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.384 qpair failed and we were unable to recover it. 00:27:47.384 [2024-07-15 18:42:32.815851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.384 [2024-07-15 18:42:32.815882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.816171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.816201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.816480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.816511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.816802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.816832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-07-15 18:42:32.817092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.817122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.817417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.817449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.817652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.817687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.817937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.817967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.818264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.818293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.818522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.818553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.818771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.818801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.819022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.819053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.819322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.819360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.819572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.819602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-07-15 18:42:32.819851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.819881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.820175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.820205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.820482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.820514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.820807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.820837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.821112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.821142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.821437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.821467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.821687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.821717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.821986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.822015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.822312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.822351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.822620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.822650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-07-15 18:42:32.822897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.822926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.823120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.823150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.823352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.823383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.823680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.823711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.823887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.823916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.824058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.824087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.824286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.824315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.824590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.824621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.824872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.824901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.825157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.825188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 
00:27:47.385 [2024-07-15 18:42:32.825439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.825470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.825675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.825704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.825897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.825926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.826138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.385 [2024-07-15 18:42:32.826167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.385 qpair failed and we were unable to recover it. 00:27:47.385 [2024-07-15 18:42:32.826439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.826470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.826676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.826705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.826977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.827006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.827252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.827280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.827487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.827517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.827768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.827797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 
00:27:47.386 [2024-07-15 18:42:32.827990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.828019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.828210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.828239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.828439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.828475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.828673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.828702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.828896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.828925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.829059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.829088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.829325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.829375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.829652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.829681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.829883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.829913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.830169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.830198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 
00:27:47.386 [2024-07-15 18:42:32.830386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.830417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.830620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.830649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.830837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.830866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.831136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.831165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.831382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.831412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.831663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.831693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.831954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.831984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.832259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.832288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.832582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.832613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.832888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.832917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 
00:27:47.386 [2024-07-15 18:42:32.833114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.833143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.833346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.833377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.833629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.833660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.833952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.833982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.834230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.834260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.834452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.834483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.834754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.834785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.835086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.835116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.835368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.835398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.835628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.835658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 
00:27:47.386 [2024-07-15 18:42:32.835909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.386 [2024-07-15 18:42:32.835939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.386 qpair failed and we were unable to recover it. 00:27:47.386 [2024-07-15 18:42:32.836167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.836196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.836475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.836505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.836701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.836731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.836984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.837013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.837190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.837219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.837493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.837524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.837801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.837830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.838055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.838084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 00:27:47.387 [2024-07-15 18:42:32.838206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.387 [2024-07-15 18:42:32.838235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:47.387 qpair failed and we were unable to recover it. 
00:27:47.387 [2024-07-15 18:42:32.838482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.387 [2024-07-15 18:42:32.838514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:47.387 qpair failed and we were unable to recover it.
[... the same connect()/qpair-connect error pair, each followed by "qpair failed and we were unable to recover it.", repeats for tqpair=0x7fb800000b90 from 18:42:32.838719 through 18:42:32.856999, every attempt to addr=10.0.0.2, port=4420 refused ...]
00:27:47.389 [2024-07-15 18:42:32.857357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.389 [2024-07-15 18:42:32.857433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.389 qpair failed and we were unable to recover it.
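Every triple in the stretch above is the same failure: posix_sock_create() gets errno 111 (ECONNREFUSED) back from connect(), nvme_tcp_qpair_connect_sock() reports it for the qpair, and the host gives the qpair up. errno 111 simply means nothing is accepting TCP connections at 10.0.0.2:4420 (the standard NVMe/TCP port) at that instant — the target's listener is not up yet or has already gone away. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the file name is illustrative) that reproduces the same errno when no listener is present:

    /* refuse_demo.c — standalone sketch, not SPDK code.
     * connect() to a TCP port with no listener fails with
     * errno == ECONNREFUSED (111 on Linux), which is exactly
     * what posix_sock_create reports in this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr = {0};
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect: errno=%d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }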
[... the same error pair continues for tqpair=0x501ed0 from 18:42:32.857656 through 18:42:32.893990, still targeting addr=10.0.0.2, port=4420, with every qpair declared unrecoverable ...]
00:27:47.392 [2024-07-15 18:42:32.894240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.392 [2024-07-15 18:42:32.894271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.392 qpair failed and we were unable to recover it.
00:27:47.392 [2024-07-15 18:42:32.894565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.392 [2024-07-15 18:42:32.894596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.392 qpair failed and we were unable to recover it. 00:27:47.392 [2024-07-15 18:42:32.894900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.392 [2024-07-15 18:42:32.894930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.392 qpair failed and we were unable to recover it. 00:27:47.392 [2024-07-15 18:42:32.895197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.392 [2024-07-15 18:42:32.895227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.392 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.895430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.895461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.895651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.895681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.895951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.895982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.896208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.896239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.896509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.896541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.896811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.896841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.897139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.897170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 
00:27:47.393 [2024-07-15 18:42:32.897466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.897497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.897775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.897805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.897926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.897956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.898149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.898180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.898479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.898511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.898804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.898833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.899011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.899042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.899256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.899286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.899547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.899579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.899825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.899855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 
00:27:47.393 [2024-07-15 18:42:32.900046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.900076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.900378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.900409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.900700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.900735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.900999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.901029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.901295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.901324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.901627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.901657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.901931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.901962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.902256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.902286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.902567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.902598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 00:27:47.393 [2024-07-15 18:42:32.902737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.393 [2024-07-15 18:42:32.902767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.393 qpair failed and we were unable to recover it. 
00:27:47.672 [2024-07-15 18:42:32.902967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.902998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.903274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.903304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.903514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.903547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.903798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.903828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.904104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.904136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.904421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.904452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.904709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.904739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.904847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.904877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.905165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.905195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.905470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.905502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 
00:27:47.672 [2024-07-15 18:42:32.905790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.905819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.905950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.905979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.906160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.906189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.906406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.906438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.906571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.906600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.906726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.906757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.907006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.907035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.907335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.907376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.907638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.907670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.907968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.907998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 
00:27:47.672 [2024-07-15 18:42:32.908193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.908224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.908406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.908438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.908712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.908743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.908991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.909020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.909217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.909247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.909426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.909457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.909733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.909763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.909966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.909996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.910207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.910237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.910417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.910448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 
00:27:47.672 [2024-07-15 18:42:32.910701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.910731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.910921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.910950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.911223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.911253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.911543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.911579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.911831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.911862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.912170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.912199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.912467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.912498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.912697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.912728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.912935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.912964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 00:27:47.672 [2024-07-15 18:42:32.913162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.672 [2024-07-15 18:42:32.913192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.672 qpair failed and we were unable to recover it. 
00:27:47.672 [2024-07-15 18:42:32.913388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.913419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.913620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.913649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.913920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.913951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.914160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.914189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.914388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.914419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.914619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.914649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.914917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.914947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.915152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.915182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.915329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.915369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.915580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.915610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 
00:27:47.673 [2024-07-15 18:42:32.915822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.915851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.916036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.916066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.916362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.916394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.916612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.916642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.916772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.916801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.917019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.917049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.917229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.917259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.917574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.917605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.917875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.917905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.918111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.918141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 
00:27:47.673 [2024-07-15 18:42:32.918399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.918435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.918700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.918731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.918908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.918938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.919148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.919176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.919453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.919734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.919763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.920030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.920059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.920363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.920395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.920606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.920636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.920856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.920885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 
00:27:47.673 [2024-07-15 18:42:32.921083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.921112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.921250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.921277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.921549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.921578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.921787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.921815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.922069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.922098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.922222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.922250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.922472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.922501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.922773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.922801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.923102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.923130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.923257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.923285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 
00:27:47.673 [2024-07-15 18:42:32.923573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.673 [2024-07-15 18:42:32.923602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.673 qpair failed and we were unable to recover it. 00:27:47.673 [2024-07-15 18:42:32.923793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.923821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.924094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.924122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.924312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.924354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.924617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.924645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.924938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.924967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.925254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.925283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.925476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.925506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.925784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.925813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.926009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.926038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 
00:27:47.674 [2024-07-15 18:42:32.926174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.926203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.926473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.926503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.926723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.926753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.927021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.927051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.927324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.927380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.927621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.927654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.927950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.927980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.928178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.928207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.928393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.928424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.928633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.928663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 
00:27:47.674 [2024-07-15 18:42:32.928949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.928978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.929231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.929269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.929573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.929605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.929878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.929908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.930199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.930228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.930514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.930546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.930824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.930854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.931053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.931084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.931363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.931394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.931671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.931701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 
00:27:47.674 [2024-07-15 18:42:32.931892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.931922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.932184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.932213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.932360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.932392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.932572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.932602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.932815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.932844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.933023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.933052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.933253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.933282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.933553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.933584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.933858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.933888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.934183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.934213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 
00:27:47.674 [2024-07-15 18:42:32.934441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.934472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.934668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.674 [2024-07-15 18:42:32.934697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.674 qpair failed and we were unable to recover it. 00:27:47.674 [2024-07-15 18:42:32.934965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.934996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.935293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.935323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.935551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.935582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.935833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.935862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.936128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.936158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.936410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.936441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.936743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.936772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.937042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.937072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 
00:27:47.675 [2024-07-15 18:42:32.937279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.937309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.937513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.937545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.937768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.937799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.937990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.938019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.938283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.938313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.938527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.938558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.938739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.938769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.939024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.939054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.939332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.939375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.939555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.939585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 
00:27:47.675 [2024-07-15 18:42:32.939849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.939878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.940130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.940160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.940418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.940451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.940706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.940737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.940919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.940949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.941255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.941284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.941485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.941517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.941767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.941797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.942014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.942043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.942301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.942331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 
00:27:47.675 [2024-07-15 18:42:32.942626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.942657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.942859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.942890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.943016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.943046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.943321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.943361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.943639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.943670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.943946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.943977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.944129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.944160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.944407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.944438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.944583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.944614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.944935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.944965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 
00:27:47.675 [2024-07-15 18:42:32.945158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.945188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.945466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.945497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.945696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.945726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.675 qpair failed and we were unable to recover it. 00:27:47.675 [2024-07-15 18:42:32.945909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.675 [2024-07-15 18:42:32.945939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.946264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.946294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.946488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.946519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.946779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.946809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.947077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.947106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.947315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.947355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.947628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.947664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 
00:27:47.676 [2024-07-15 18:42:32.947883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.947913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.948163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.948194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.948373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.948403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.948700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.948730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.948920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.948950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.949201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.949231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.949411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.949443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.949588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.949618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.949890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.949920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.950176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.950206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 
00:27:47.676 [2024-07-15 18:42:32.950460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.950491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.950686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.950716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.950993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.951023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.951217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.951248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.951428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.951459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.951600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.951630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.951818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.951848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.952053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.952083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.952281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.952312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.952595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.952626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 
00:27:47.676 [2024-07-15 18:42:32.952877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.952908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.953217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.953247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.953511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.953542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.953847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.953878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.954143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.954173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.954361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.954392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.954671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.676 [2024-07-15 18:42:32.954701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.676 qpair failed and we were unable to recover it. 00:27:47.676 [2024-07-15 18:42:32.954896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.954926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.955065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.955095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.955229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.955258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 
00:27:47.677 [2024-07-15 18:42:32.955527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.955557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.955737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.955765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.956014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.956043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.956294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.956323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.956640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.956671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.956955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.956984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.957267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.957298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.957532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.957564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.957772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.957802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.958055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.958084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 
00:27:47.677 [2024-07-15 18:42:32.958290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.958321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.958612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.958642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.958842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.958872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.959146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.959175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.959460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.959491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.959779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.959809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.960014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.960044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.960259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.960288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.960548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.960579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.960776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.960807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 
00:27:47.677 [2024-07-15 18:42:32.961083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.961113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.961336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.961375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.961554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.961584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.961855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.961885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.962090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.962120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.962378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.962410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.962671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.962701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.962998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.963029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.963331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.963372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.963567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.963597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 
00:27:47.677 [2024-07-15 18:42:32.963896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.963926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.964104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.964134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.964328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.964369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.964649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.964679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.964870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.964899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.965163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.965193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.965492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.965523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.965795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.965830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.677 [2024-07-15 18:42:32.966023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.677 [2024-07-15 18:42:32.966053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.677 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.966322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.966362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 
00:27:47.678 [2024-07-15 18:42:32.966555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.966585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.966800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.966831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.967029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.967058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.967318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.967355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.967557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.967587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.967766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.967795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.968009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.968038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.968239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.968269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.968551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.968582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.968858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.968888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 
00:27:47.678 [2024-07-15 18:42:32.969094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.969123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.969270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.969301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.969589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.969620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.969895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.969925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.970161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.970191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.970440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.970471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.970747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.970777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.971067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.971097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.971356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.971387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.971684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.971713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 
00:27:47.678 [2024-07-15 18:42:32.972003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.972033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.972307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.972348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.972611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.972641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.972941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.972971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.973193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.973223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.973509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.973541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.973765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.973795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.974074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.974104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.974390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.974422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.974704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.974734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 
00:27:47.678 [2024-07-15 18:42:32.974960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.974990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.975261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.975292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.975496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.975526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.975772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.975802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.975998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.976028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.976205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.976234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.976381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.976413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.976599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.976628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.976809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.976844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 00:27:47.678 [2024-07-15 18:42:32.977150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.678 [2024-07-15 18:42:32.977180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.678 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 190 further connection attempts, timestamps 2024-07-15 18:42:32.977376 through 18:42:33.027989 ...]
00:27:47.683 [2024-07-15 18:42:33.028259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.028290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.028540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.028573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.028852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.028884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.029031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.029062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.029356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.029388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.029594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.029625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.029846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.029876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.030056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.030086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.030360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.030393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.030671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.030702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 
00:27:47.683 [2024-07-15 18:42:33.030981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.683 [2024-07-15 18:42:33.031012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.683 qpair failed and we were unable to recover it. 00:27:47.683 [2024-07-15 18:42:33.031300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.031330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.031661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.031702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.031925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.031955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.032142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.032171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.032446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.032479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.032625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.032654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.032904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.032933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.033115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.033144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.033395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.033425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 
00:27:47.684 [2024-07-15 18:42:33.033545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.033575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.033774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.033805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.033948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.033979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.034160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.034190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.034480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.034512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.034729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.034761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.034888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.034918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.035051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.035082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.035214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.035244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.035439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.035469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 
00:27:47.684 [2024-07-15 18:42:33.035646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.035675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.035892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.035923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.036199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.036229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.036430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.036463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.036658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.036689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.036809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.036839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.036980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.037010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.037286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.037316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.037515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.037546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.037737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.037767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 
00:27:47.684 [2024-07-15 18:42:33.038047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.038078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.038288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.038319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.038606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.038638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.038842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.038873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.039121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.039151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.039358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.039390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.039522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.684 [2024-07-15 18:42:33.039553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.684 qpair failed and we were unable to recover it. 00:27:47.684 [2024-07-15 18:42:33.039802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.039833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.040089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.040120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.040258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.040289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 
00:27:47.685 [2024-07-15 18:42:33.040485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.040517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.040766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.040797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.040991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.041020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.041168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.041198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.041420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.041453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.041587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.041617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.041747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.041777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.042030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.042059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.042276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.042305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.042530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.042562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 
00:27:47.685 [2024-07-15 18:42:33.042838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.042868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.043141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.043171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.043381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.043413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.043621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.043652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.043855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.043886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.044079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.044107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.044242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.044270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.044549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.044582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.044832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.044862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.045139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.045169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 
00:27:47.685 [2024-07-15 18:42:33.045389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.045420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.045608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.045639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.045839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.045868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.046134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.046164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.046293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.046324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.046600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.046632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.046830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.046860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.047057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.047087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.047288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.047317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.047450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.047481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 
00:27:47.685 [2024-07-15 18:42:33.047675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.047710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.047987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.048016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.048287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.048316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.048469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.048501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.048716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.048746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.049043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.049073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.049278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.049309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.049499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.685 [2024-07-15 18:42:33.049530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.685 qpair failed and we were unable to recover it. 00:27:47.685 [2024-07-15 18:42:33.049734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.049764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.049955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.049985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 
00:27:47.686 [2024-07-15 18:42:33.050201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.050230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.050443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.050475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.050691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.050722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.050980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.051010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.051144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.051175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.051446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.051478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.051726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.051757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.052006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.052036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.052283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.052313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.052510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.052541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 
00:27:47.686 [2024-07-15 18:42:33.052741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.052771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.052975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.053005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.053279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.053310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.053465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.053495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.053763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.053793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.054010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.054040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.054231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.054261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.054474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.054506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.054702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.054732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.054978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.055008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 
00:27:47.686 [2024-07-15 18:42:33.055205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.055235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.055373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.055403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.055663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.055693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.055890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.055920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.056100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.056131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.056363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.056394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.056693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.056724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.056997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.057027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.057245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.057274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.057520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.057551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 
00:27:47.686 [2024-07-15 18:42:33.057673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.057704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.057917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.057952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.058133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.058163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.058381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.058414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.058594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.058624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.058818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.058848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.059123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.059153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.059438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.059469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.059693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.059722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 00:27:47.686 [2024-07-15 18:42:33.059897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.686 [2024-07-15 18:42:33.059927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.686 qpair failed and we were unable to recover it. 
00:27:47.686 [2024-07-15 18:42:33.060116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.060146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.060418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.060449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.060705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.060735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.060934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.060965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.061140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.061170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.061357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.061389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.061671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.061701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.061842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.061872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.062140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.062170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 00:27:47.687 [2024-07-15 18:42:33.062390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.687 [2024-07-15 18:42:33.062422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.687 qpair failed and we were unable to recover it. 
00:27:47.687 [2024-07-15 18:42:33.062614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.687 [2024-07-15 18:42:33.062644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.687 qpair failed and we were unable to recover it.
00:27:47.687 [... the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x501ed0, addr=10.0.0.2, port=4420 repeat, with successive timestamps, roughly 200 more times between 18:42:33.062 and 18:42:33.119; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:47.692 [2024-07-15 18:42:33.119070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.692 [2024-07-15 18:42:33.119100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.692 qpair failed and we were unable to recover it.
00:27:47.692 [2024-07-15 18:42:33.119375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.119407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.119697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.119728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.120006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.120036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.120246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.120276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.120482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.120514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.120781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.120811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.120945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.120975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.121242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.121271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.121482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.121513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.121759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.121789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 
00:27:47.692 [2024-07-15 18:42:33.122062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.122091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.122390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.122421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.122640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.122669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.122809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.123108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.123138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.123413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.123444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.123747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.123778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.124047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.124077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.124188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.124218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.124349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.124380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 
00:27:47.692 [2024-07-15 18:42:33.124528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.124558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.124705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.124735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.125002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.692 [2024-07-15 18:42:33.125032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.692 qpair failed and we were unable to recover it. 00:27:47.692 [2024-07-15 18:42:33.125283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.125314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.125531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.125562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.125828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.125858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.126154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.126184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.126461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.126493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.126773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.126803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.127004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.127035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 
00:27:47.693 [2024-07-15 18:42:33.127290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.127320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.127578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.127610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.127797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.127828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.128078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.128108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.128357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.128389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.128621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.128652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.128915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.128946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.129197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.129228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.129374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.129406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.129652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.129682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 
00:27:47.693 [2024-07-15 18:42:33.129876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.129906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.130177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.130208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.130501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.130533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.130732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.130767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.131048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.131078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.131270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.131300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.131518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.131550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.131798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.131828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.132106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.132136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.132314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.132356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 
00:27:47.693 [2024-07-15 18:42:33.132543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.132573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.132828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.132858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.133106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.133137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.133361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.133392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.133610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.133640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.133849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.133879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.134077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.134108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.134370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.134403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.693 [2024-07-15 18:42:33.134605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.693 [2024-07-15 18:42:33.134636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.693 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.134812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.134843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 
00:27:47.694 [2024-07-15 18:42:33.135031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.135063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.135280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.135310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.135609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.135655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.135851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.135882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.136130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.136160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.136410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.136441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.136720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.136749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.136949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.136979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.137177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.137207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.137481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.137512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 
00:27:47.694 [2024-07-15 18:42:33.137640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.137671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.137930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.137961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.138236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.138266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.138545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.138577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.138788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.138817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.139064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.139094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.139371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.139403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.139694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.139725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.139974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.140004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.140183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.140213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 
00:27:47.694 [2024-07-15 18:42:33.140460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.140492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.140673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.140703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.140998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.141028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.141226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.141256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.141529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.141564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.141776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.141806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.141942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.141972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.142159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.142187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.142460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.142491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.142691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.142720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 
00:27:47.694 [2024-07-15 18:42:33.142992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.143022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.143313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.143352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.143496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.143528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.143746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.143775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.144088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.144119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.144382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.144413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.144723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.144753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.145038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.145068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.145276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.145306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 00:27:47.694 [2024-07-15 18:42:33.145510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.694 [2024-07-15 18:42:33.145541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.694 qpair failed and we were unable to recover it. 
00:27:47.694 [2024-07-15 18:42:33.145743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.145774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.145988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.146018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.146211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.146241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.146506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.146538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.146733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.146763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.146951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.146982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.147159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.147189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.147375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.147406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.147703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.147735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.147924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.147954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 
00:27:47.695 [2024-07-15 18:42:33.148156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.148186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.148387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.148423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.148719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.148750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.148933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.148963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.149231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.149262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.149539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.149571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.149791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.149821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.150070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.150100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.150358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.150390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.150641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.150673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 
00:27:47.695 [2024-07-15 18:42:33.150881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.150911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.151134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.151164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.151436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.151468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.151688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.151718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.151977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.152008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.152270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.152301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.152511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.152542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.152734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.152764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.153022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.153052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.153251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.153281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 
00:27:47.695 [2024-07-15 18:42:33.153468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.153499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.153632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.153662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.153913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.153944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.154219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.154250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.154439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.154470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.154738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.154769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.155064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.155094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.155378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.155409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.155664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.155693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.155893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.155925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 
00:27:47.695 [2024-07-15 18:42:33.156194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.156225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.695 qpair failed and we were unable to recover it. 00:27:47.695 [2024-07-15 18:42:33.156405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.695 [2024-07-15 18:42:33.156437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.156616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.156646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.156775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.156805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.157100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.157130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.157401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.157432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.157627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.157657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.157909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.157939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.158161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.158191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.158314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.158366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 
00:27:47.696 [2024-07-15 18:42:33.158621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.158651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.158852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.158883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.159109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.159150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.159416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.159448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.159644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.159674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.159951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.159981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.160254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.160284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.160559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.160590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.160715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.160745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.160923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.160953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 
00:27:47.696 [2024-07-15 18:42:33.161233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.161262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.161442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.161473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.161663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.161693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.161893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.161923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.162128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.162158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.162357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.162388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.162588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.162618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.162835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.162865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.163074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.163104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.163321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.163363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 
00:27:47.696 [2024-07-15 18:42:33.163574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.163604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.163726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.163756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.164006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.164035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.164241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.164271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.164490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.164522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.164657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.164687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.164977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.165007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.696 [2024-07-15 18:42:33.165212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.696 [2024-07-15 18:42:33.165242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.696 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.165369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.165400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.165597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.165632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 
00:27:47.697 [2024-07-15 18:42:33.165923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.165953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.166087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.166117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.166413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.166443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.166584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.166614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.166800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.166830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.167025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.167053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.167379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.167411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.167615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.167646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.167838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.167867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.168053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.168082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 
00:27:47.697 [2024-07-15 18:42:33.168281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.168311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.168577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.168609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.168814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.168844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.169120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.169151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.169401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.169433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.169638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.169668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.169921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.169951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.170228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.170257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.170550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.170581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.170764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.170793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 
00:27:47.697 [2024-07-15 18:42:33.171048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.171078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.171360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.171391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.171679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.171709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.172009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.172039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.172333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.172373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.172664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.172694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.172967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.172998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.173186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.173216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.173408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.173439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.173712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.173743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 
00:27:47.697 [2024-07-15 18:42:33.173930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.173961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.174232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.697 [2024-07-15 18:42:33.174262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.697 qpair failed and we were unable to recover it. 00:27:47.697 [2024-07-15 18:42:33.174512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.174544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.174757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.174788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.174991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.175021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.175286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.175316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.175613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.175644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.175926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.175955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.176217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.176247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.176439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.176470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 
00:27:47.698 [2024-07-15 18:42:33.176660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.176695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.176883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.176912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.177163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.177193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.177487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.177519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.177815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.177845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.178097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.178128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.178302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.178331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.178591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.178623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.178919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.178949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.179224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.179255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 
00:27:47.698 [2024-07-15 18:42:33.179552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.179583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.179878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.180183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.180213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.180431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.180463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.180740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.180770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.181045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.181076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.181365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.181397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.181651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.181682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.181957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.181987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.182114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.182145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 
00:27:47.698 [2024-07-15 18:42:33.182335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.182374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.182648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.182679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.182873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.182904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.183170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.183200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.183500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.183532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.183800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.183830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.183972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.184002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.184247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.184284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.184564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.184596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 00:27:47.698 [2024-07-15 18:42:33.184870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.698 [2024-07-15 18:42:33.184899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.698 qpair failed and we were unable to recover it. 
00:27:47.699 [2024-07-15 18:42:33.185080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.185110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.185299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.185329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.185590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.185620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.185885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.185914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.186105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.186135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.186359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.186390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.186649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.186679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.186886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.186916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.187162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.187193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.187440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.187472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 
00:27:47.699 [2024-07-15 18:42:33.187580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.187610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.187739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.187769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.187953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.187983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.188252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.188282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.188427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.188459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.188612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.188641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.188834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.188864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.189066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.189097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.189292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.189321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.189519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.189549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 
00:27:47.699 [2024-07-15 18:42:33.189800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.189829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.190085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.190115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.190301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.190331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.190589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.190620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.190832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.190862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.191045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.191075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.191196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.191225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.191461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.191493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.191690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.191722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.191903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.191932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 
00:27:47.699 [2024-07-15 18:42:33.192152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.192182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.192389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.192420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.192610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.192640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.192843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.192872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.193081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.193110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.193299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.193328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.193542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.193573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.193847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.193877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.194169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.194204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 00:27:47.699 [2024-07-15 18:42:33.194487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.699 [2024-07-15 18:42:33.194518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.699 qpair failed and we were unable to recover it. 
00:27:47.700 [2024-07-15 18:42:33.194719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.194748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.195035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.195064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.195261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.195291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.195509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.195539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.195787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.195816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.195994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.196024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.196285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.196314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.196575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.196607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.196736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.196765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.196969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.196998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 
00:27:47.700 [2024-07-15 18:42:33.197125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.197154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.197361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.197392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.197518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.197548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.197751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.197781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.197967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.197996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.198217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.198247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.198442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.198473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.198583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.198613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.198719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.198748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 00:27:47.700 [2024-07-15 18:42:33.198934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.700 [2024-07-15 18:42:33.198964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.700 qpair failed and we were unable to recover it. 
00:27:47.984 [2024-07-15 18:42:33.247468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.247499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.247777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.247807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.247959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.247989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.248289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.248320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.248457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.248487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.248686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.248717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.248913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.248944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.249192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.249223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.249364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.249396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.249659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.249689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 
00:27:47.984 [2024-07-15 18:42:33.249957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.249988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.250167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.250197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.250472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.250504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.250730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.250760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.251036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.251066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.251361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.251393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.251594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.251625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.251809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.251840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.252041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.252072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.252360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.252391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 
00:27:47.984 [2024-07-15 18:42:33.252536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.252566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.252761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.252791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.253063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.253093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.253382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.253414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.253644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.253674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.253924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.253954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.254220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.254250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.254546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.254578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.984 qpair failed and we were unable to recover it. 00:27:47.984 [2024-07-15 18:42:33.254782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.984 [2024-07-15 18:42:33.254813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.255062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.255091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.255273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.255303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.255608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.255639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.255908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.255939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.256244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.256281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.256492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.256523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.256800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.256830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.257113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.257143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.257434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.257466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.257685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.257715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.257987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.258017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.258315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.258364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.258566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.258597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.258869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.258900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.259104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.259135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.259388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.259418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.259557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.259587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.259859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.259890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.260101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.260131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.260261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.260290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.260572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.260604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.260855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.260885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.261110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.261140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.261419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.261451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.261728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.261758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.261955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.261985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.262250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.262281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.262444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.262475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.262610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.262640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.262917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.262947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.263148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.263178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.263380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.263421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.263696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.263727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.264058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.264088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.264287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.264317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.264446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.264477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.264755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.264787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.265061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.265090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.265292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.265323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.265525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.265556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.265750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.265781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.266041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.266070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.266260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.266290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.266510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.266542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.266809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.266839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.267034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.267065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.267364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.267397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.267645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.267678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.267932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.267963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.268213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.268243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.268474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.268506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.268753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.268783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.268984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.269015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.269299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.269332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.269613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.269643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.269924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.269954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.270207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.270237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.270502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.270534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.270676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.270706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.270962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.270993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.271270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.271300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 
00:27:47.985 [2024-07-15 18:42:33.271506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.985 [2024-07-15 18:42:33.271538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.985 qpair failed and we were unable to recover it. 00:27:47.985 [2024-07-15 18:42:33.271753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.271784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.272036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.272066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.272265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.272295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.272503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.272535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.272740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.272770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.272975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.273006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.273158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.273189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.273402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.273434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.273649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.273681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 
00:27:47.986 [2024-07-15 18:42:33.273814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.273844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.274097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.274133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.274413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.274445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.274585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.274615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.274810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.274839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.275067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.275096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.275355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.275388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.275521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.275552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.275819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.275850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.276151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.276181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 
00:27:47.986 [2024-07-15 18:42:33.276386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.276417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.276667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.276697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.276890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.276920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.277040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.277070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.277382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.277414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.277648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.277679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.277867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.277898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.278120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.278151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.278398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.278431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.278698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.278727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 
00:27:47.986 [2024-07-15 18:42:33.278928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.278958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.279168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.279198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.279358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.279390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.279533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.279563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.279855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.279885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.280010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.280042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.280266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.280295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.280575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.280607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.280894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.280931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.281131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.281160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 
00:27:47.986 [2024-07-15 18:42:33.281417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.281448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.281696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.281727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.281940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.281970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.282245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.282274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.282472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.282503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.282702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.282733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.282875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.282905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.283055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.283085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.283359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.283390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 00:27:47.986 [2024-07-15 18:42:33.283581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.986 [2024-07-15 18:42:33.283611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.986 qpair failed and we were unable to recover it. 
00:27:47.986 [2024-07-15 18:42:33.283802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.986 [2024-07-15 18:42:33.283832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.986 qpair failed and we were unable to recover it.
[... the same three-message triplet -- connect() failed (errno = 111), sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats back-to-back with nothing else interleaved, from 18:42:33.283802 through 18:42:33.337269 ...]
00:27:47.989 [2024-07-15 18:42:33.337240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.989 [2024-07-15 18:42:33.337269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.989 qpair failed and we were unable to recover it.
00:27:47.989 [2024-07-15 18:42:33.337470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.989 [2024-07-15 18:42:33.337501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.989 qpair failed and we were unable to recover it. 00:27:47.989 [2024-07-15 18:42:33.337690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.989 [2024-07-15 18:42:33.337720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.989 qpair failed and we were unable to recover it. 00:27:47.989 [2024-07-15 18:42:33.337913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.989 [2024-07-15 18:42:33.337942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.989 qpair failed and we were unable to recover it. 00:27:47.989 [2024-07-15 18:42:33.338231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.338261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.338404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.338436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.338626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.338656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.338806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.338836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.339098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.339130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.339400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.339431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.339632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.339664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.339803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.339835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.340088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.340118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.340248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.340278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.340496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.340527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.340713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.340743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.340945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.340976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.341235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.341264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.341445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.341476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.341692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.341722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.341944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.341973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.342129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.342159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.342364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.342395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.342666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.342696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.343003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.343039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.343336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.343411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.343724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.343756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.343967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.343997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.344226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.344255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.344459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.344490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.344690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.344720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.344899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.344929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.345110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.345139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.345332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.345373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.345602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.345632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.345789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.345818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.346114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.346145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.346445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.346479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.346717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.346749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.346950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.346980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.347178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.347207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.347492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.347524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.347685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.347715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.347913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.347942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.348140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.348169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.348448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.348480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.348621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.348651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.348867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.348896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.349203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.349233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.349425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.349456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.349657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.349687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.349824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.349853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.350056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.350087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.350302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.350332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.350602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.350633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.350925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.350955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.351233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.351262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.351468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.351499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.351771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.351801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.351940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.351970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.352146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.352176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 
00:27:47.990 [2024-07-15 18:42:33.352366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.352397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.352656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.352685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.352805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.352834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.990 [2024-07-15 18:42:33.353041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.990 [2024-07-15 18:42:33.353071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.990 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.353358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.353395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.353600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.353630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.353822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.353852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.354146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.354176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.354474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.354506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.354778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.354809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.355025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.355056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.355247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.355277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.355561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.355592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.355807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.355836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.356094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.356123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.356393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.356424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.356621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.356651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.356841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.356871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.357070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.357099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.357309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.357348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.357553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.357583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.357775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.357804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.357995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.358024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.358211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.358241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.358437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.358468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.358669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.358698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.358822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.358852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.359060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.359090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.359363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.359393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.359599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.359629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.359819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.359848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.360084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.360120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.360251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.360281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.360582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.360613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.360853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.360883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.361221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.361251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.361535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.361566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.361786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.361815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.361968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.361998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.362140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.362170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.362379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.362410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.362545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.362577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.362775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.362805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.362987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.363017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.363288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.363318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.363515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.363546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.363745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.363774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.363956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.363986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.364184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.364214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.364428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.364458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.364710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.364740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.364885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.364914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.365106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.365136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.365332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.365376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.365525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.365554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.365773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.365802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.366040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.366069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.366359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.366389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.366577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.366606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.366778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.366808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.367096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.367125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.367316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.367375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.367574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.367604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.367748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.367777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.367975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.368005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.368187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.368216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.368408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.368440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.368649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.368678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.368820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.368849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.369067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.369096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 
00:27:47.991 [2024-07-15 18:42:33.369302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.369331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.369590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.369620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.991 qpair failed and we were unable to recover it. 00:27:47.991 [2024-07-15 18:42:33.369772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.991 [2024-07-15 18:42:33.369806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.370066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.370097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.370359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.370391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.370571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.370601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.370793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.370823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.371126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.371155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.371284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.371314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 00:27:47.992 [2024-07-15 18:42:33.371480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.992 [2024-07-15 18:42:33.371512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.992 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.419824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.419855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.420185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.420216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.420495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.420527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.420677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.420707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.420819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.420850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.420973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.421003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.421248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.421279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.421502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.421534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.421772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.421802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.421932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.421962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.422091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.422121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.422325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.422380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.422578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.422609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.422747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.422777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.422983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.423013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.423281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.423312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.423637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.423669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.423810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.423840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.424133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.424163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.424410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.424441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.424577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.424607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.424855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.424885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.425108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.425139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.425326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.425368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.425505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.425535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.425678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.425708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.425831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.425861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.426064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.426093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.426224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.426254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.426444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.426475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.426772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.426804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.427008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.427036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.427224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.427253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.427448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.427480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.427687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.427716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.427861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.427890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.428024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.428054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.428254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.428290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.428520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.428552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.428734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.428764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.429014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.429043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.429317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.429363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.429560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.429589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.429734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.429764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.430043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.430072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.430367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.430399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.430679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.430709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.430959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.430989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.431202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.431232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.431368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.431401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 
00:27:47.995 [2024-07-15 18:42:33.431549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.431580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.431862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.431892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.432014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.432045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.432310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.432349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.432549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.432579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.432708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.432739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.995 [2024-07-15 18:42:33.432935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.995 [2024-07-15 18:42:33.432965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.995 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.433231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.433261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.433512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.433543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.433689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.433720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.433838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.433868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.434007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.434037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.434170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.434200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.434390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.434422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.434602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.434638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.434823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.434852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.435053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.435083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.435213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.435242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.435426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.435457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.435591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.435620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.435819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.435848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.436093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.436123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.436299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.436329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.436544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.436575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.436685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.436715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.436962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.436991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.437170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.437199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.437330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.437373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.437546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.437621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.437829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.437863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.438052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.438083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.438305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.438350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.438619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.438650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.438872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.438902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.439120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.439151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.439326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.439369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.439553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.439584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.439862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.439892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.440037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.440067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.440191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.440222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.440501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.440532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.440811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.440850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.441039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.441069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.441184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.441213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.441395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.441425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.441704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.441734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.441867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.441897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.442085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.442115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.442248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.442279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.442540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.442571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.442702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.442731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.442916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.442945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.443170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.443200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.443317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.443358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.443581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.443612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.443840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.443870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.444063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.444094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.444299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.444328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.444576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.444606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.444789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.444820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.444940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.444969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.445151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.445181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.445367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.445398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.445653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.445683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.445799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.445829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.446006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.446035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.446244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.446274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.446411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.446441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.446746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.446823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.446959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.446992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 
00:27:47.996 [2024-07-15 18:42:33.447181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.447212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.447458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.447495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.447636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.447667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.447945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.447976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.448104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.448135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.996 qpair failed and we were unable to recover it. 00:27:47.996 [2024-07-15 18:42:33.448350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.996 [2024-07-15 18:42:33.448382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.997 qpair failed and we were unable to recover it. 00:27:47.997 [2024-07-15 18:42:33.448561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.997 [2024-07-15 18:42:33.448591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.997 qpair failed and we were unable to recover it. 00:27:47.997 [2024-07-15 18:42:33.448789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.997 [2024-07-15 18:42:33.448819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.997 qpair failed and we were unable to recover it. 00:27:47.997 [2024-07-15 18:42:33.449012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.997 [2024-07-15 18:42:33.449042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.997 qpair failed and we were unable to recover it. 00:27:47.997 [2024-07-15 18:42:33.449165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.997 [2024-07-15 18:42:33.449195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:47.997 qpair failed and we were unable to recover it. 
00:27:47.997 [2024-07-15 18:42:33.449385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.997 [2024-07-15 18:42:33.449416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.997 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against tqpair=0x501ed0 repeats through 18:42:33.461499; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:47.997 [2024-07-15 18:42:33.461660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.997 [2024-07-15 18:42:33.461735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:47.997 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against tqpair=0x7fb808000b90 repeats through 18:42:33.470082; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:47.998 [2024-07-15 18:42:33.470261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.998 [2024-07-15 18:42:33.470336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:47.998 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against tqpair=0x501ed0 repeats through 18:42:33.492476; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:48.000 [2024-07-15 18:42:33.492597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.492625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.492743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.492772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.492899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.492928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.493121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.493150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.493256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.493285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.493557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.493588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.493720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.493749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.493925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.493954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.494071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.494100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.494284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.494314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-15 18:42:33.494437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.494468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.494742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.494772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.494892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.494922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.495157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.495186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.495379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.495410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.495534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.495564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.495757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.495787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.495914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.495943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.496077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.496239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-15 18:42:33.496383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.496589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.496727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.496896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.496925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.497200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.497417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.497572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.497723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.497861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.497988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-15 18:42:33.498124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.498352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.498494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.498652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.498888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.498917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.499109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.499263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.499449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.499600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.499743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-15 18:42:33.499953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.499982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.500155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.500184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.500290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.500319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.500461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.500491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.500777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.500807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.500983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.501013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.501226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.501256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.501401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.501432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-15 18:42:33.501646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.000 [2024-07-15 18:42:33.501675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.501806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.501841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.501953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.501983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.502101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.502130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.502314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.502356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.502551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.502581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.502764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.502794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.502935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.502965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.503098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.503128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.503247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.503278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.503536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.503566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.503768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.503798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.503937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.503967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.504148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.504178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.504369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.504401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.504519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.504549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.504720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.504750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.504887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.504916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.505099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.505301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.505461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.505605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.505745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.505909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.505938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.506124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.506153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.506336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.506380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.506564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.506594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.506702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.506731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.506942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.506971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.507207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.507236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.507383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.507414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.507543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.507574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.507818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.507846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.507958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.507988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.508215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.508389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.508536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.508747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.508880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.508991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.509020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.509192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.509221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.509526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.509557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.509685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.509720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.509836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.509865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.509991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.510187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.510361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.510517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.510732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.510877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.510907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.511039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.511235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.511388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.511593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.511740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.511888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.511917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.512896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.512925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.001 [2024-07-15 18:42:33.513165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.513326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.513514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.513665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.513799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.513936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.513965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.514091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.001 [2024-07-15 18:42:33.514120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-15 18:42:33.514226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.514261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.514363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.514394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.514507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.514537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 
00:27:48.002 [2024-07-15 18:42:33.514731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.514760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.514967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.514997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.515138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.515376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.515524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.515660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.515796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.515976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.516005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.516199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.516227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 00:27:48.002 [2024-07-15 18:42:33.516475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.002 [2024-07-15 18:42:33.516506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.002 qpair failed and we were unable to recover it. 
00:27:48.002 [2024-07-15 18:42:33.516635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.002 [2024-07-15 18:42:33.516663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.002 qpair failed and we were unable to recover it.
00:27:48.002 [... the same three-line record (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats back-to-back from 18:42:33.516850 through 18:42:33.524766 for tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 ...]
00:27:48.284 [2024-07-15 18:42:33.524994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.284 [2024-07-15 18:42:33.525063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.284 qpair failed and we were unable to recover it.
00:27:48.289 [... the identical record then repeats back-to-back from 18:42:33.525254 through 18:42:33.556908 for tqpair=0x501ed0 with addr=10.0.0.2, port=4420; every connect() attempt fails with errno = 111 and no qpair can be recovered ...]
00:27:48.290 [2024-07-15 18:42:33.557132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.557161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.557278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.557308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.557466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.557496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.557619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.557648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.557813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.557842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.558095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.558123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.558362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.558393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.558525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.558554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.558672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.558702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.558884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.558913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 
00:27:48.290 [2024-07-15 18:42:33.559100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.559129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.559414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.559444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.559622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.559651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.559762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.559791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.559907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.559936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.560161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.560189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.560300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.560330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.560604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.560635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.560801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.560831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.560937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.560966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 
00:27:48.290 [2024-07-15 18:42:33.561098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.561238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.561400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.561550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.561755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.561901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.561930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.562169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.562200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.562327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.562369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.562547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.562576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.562791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.562821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 
00:27:48.290 [2024-07-15 18:42:33.562935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.562964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.563087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.563117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.563242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.563273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.563535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.563566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.563686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.563716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.563843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.563883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.564163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.290 [2024-07-15 18:42:33.564193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.290 qpair failed and we were unable to recover it. 00:27:48.290 [2024-07-15 18:42:33.564453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.564484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.564612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.564642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.564820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.564850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 
00:27:48.291 [2024-07-15 18:42:33.565019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.565173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.565377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.565612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.565762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.565914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.565944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.566042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.566072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.566285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.566315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.566510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.566540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.566743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.566773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 
00:27:48.291 [2024-07-15 18:42:33.566945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.566975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.567109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.567139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.567246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.567276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.567459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.567490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.567672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.567702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.567878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.567907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.568027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.568056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.568312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.568360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.568535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.568565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.568744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.568774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 
00:27:48.291 [2024-07-15 18:42:33.568978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.569196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.569357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.569583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.569729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.569935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.569964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.570153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.570182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.570359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.570389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.570514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.570544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.570668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.570698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 
00:27:48.291 [2024-07-15 18:42:33.570808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.570837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.571041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.571072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.571197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.571226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.291 [2024-07-15 18:42:33.571465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.291 [2024-07-15 18:42:33.571496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.291 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.571697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.571726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.571928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.571957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.572083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.572113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.572357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.572388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.572561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.572591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.572760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.572791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 
00:27:48.292 [2024-07-15 18:42:33.572907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.572937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.573054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.573084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.573211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.573241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.573480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.573510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.573639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.573669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.573904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.573934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.574042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.574188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.574321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.574470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 
00:27:48.292 [2024-07-15 18:42:33.574687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.574895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.574924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.575101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.575131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.575318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.575360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.575491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.575519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.575690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.575718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.575827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.575856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.576050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.576079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.576261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.576291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.576557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.576588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 
00:27:48.292 [2024-07-15 18:42:33.576774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.576803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.292 [2024-07-15 18:42:33.577004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.292 [2024-07-15 18:42:33.577033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.292 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.577955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.577984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.578087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.578116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.578280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.578309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 
00:27:48.293 [2024-07-15 18:42:33.578488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.578518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.578684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.578713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.578831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.578860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.579098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.579127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.579252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.579281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.579409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.579440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.579613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.579643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.579819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.579850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.580083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.580113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.580242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.580272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 
00:27:48.293 [2024-07-15 18:42:33.580408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.580440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.580672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.580702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.580835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.580865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.580985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.581224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.581470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.581615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.581750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.581920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.581950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 00:27:48.293 [2024-07-15 18:42:33.582122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.293 [2024-07-15 18:42:33.582152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.293 qpair failed and we were unable to recover it. 
00:27:48.293 [2024-07-15 18:42:33.582266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.293 [2024-07-15 18:42:33.582296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.293 qpair failed and we were unable to recover it.
00:27:48.293 [... the same three-record sequence — connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. — repeats for tqpair=0x501ed0 (14 occurrences in total, 18:42:33.582266 through 18:42:33.584414) ...]
00:27:48.294 [2024-07-15 18:42:33.584605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x510000 is same with the state(5) to be set
00:27:48.294 [2024-07-15 18:42:33.584889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.294 [2024-07-15 18:42:33.584958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.294 qpair failed and we were unable to recover it.
00:27:48.294 [... the same sequence repeats for tqpair=0x7fb810000b90 (~195 occurrences in total, 18:42:33.584889 through 18:42:33.622320, log time 00:27:48.294-00:27:48.299); every connection attempt to 10.0.0.2 port 4420 is refused with errno = 111 and no qpair is recovered ...]
00:27:48.299 [2024-07-15 18:42:33.622437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.622467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.622600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.622629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.622865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.622895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.623037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.623193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.623403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.623613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.623822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 00:27:48.299 [2024-07-15 18:42:33.623998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.299 [2024-07-15 18:42:33.624028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.299 qpair failed and we were unable to recover it. 
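The errno = 111 in the run above is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 any more (the target process is killed just below), so every connect() attempt from the host's reconnect path is actively refused. A minimal sketch of that failure mode, using plain POSIX sockets rather than SPDK's nvme_tcp code; the address and port are taken from the log, everything else is illustrative:

```python
# Minimal sketch: reproduce the errno = 111 (ECONNREFUSED) seen in the log by
# connecting to a TCP port with no listener. This is generic POSIX socket
# behavior, not SPDK code; address/port come from the log lines above.
import errno
import socket

def try_connect(addr: str, port: int, timeout: float = 1.0) -> int:
    """Return 0 if the TCP connect succeeds, else the errno it failed with."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((addr, port))
            return 0
        except OSError as e:
            return e.errno or -1  # -1 covers timeouts, which carry no errno

assert errno.ECONNREFUSED == 111  # value on Linux, as in the log
# While the NVMe-oF target is down this returns 111; once nvmf_tgt is
# listening again on port 4420 it returns 0.
print(try_connect("10.0.0.2", 4420))
```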
00:27:48.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4076059 Killed "${NVMF_APP[@]}" "$@"
00:27:48.299 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:48.299 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:48.299 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:48.299 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:48.299 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:48.300 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:48.300 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4076940
00:27:48.300 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4076940
00:27:48.300 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4076940 ']'
00:27:48.301 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:48.301 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:48.301 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:48.301 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:48.301 18:42:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:48.301 [2024-07-15 18:42:33.624214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.301 [2024-07-15 18:42:33.624243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.301 qpair failed and we were unable to recover it.
00:27:48.301 (the three lines above repeat 64 times in total for tqpair=0x7fb810000b90, 18:42:33.624214 - 18:42:33.635053, interleaved line-by-line with the shell trace above in the raw log)
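The trace above shows the harness relaunching the target (nvmf_tgt with core mask -m 0xF0, i.e. cores 4-7) and then blocking in waitforlisten 4076940, with rpc_addr=/var/tmp/spdk.sock and max_retries=100 as its locals. A hedged sketch of what a waitforlisten-style helper does, assuming only what the trace shows; the real helper lives in autotest_common.sh and is written in bash, so this Python equivalent is illustrative, not its actual implementation:

```python
# Illustrative waitforlisten-style poll: wait until the app's RPC UNIX-domain
# socket (/var/tmp/spdk.sock in the trace) accepts connections, giving up
# after max_retries attempts (100 in the trace). Not the real bash helper.
import os
import socket
import time

def wait_for_listen(pid: int, rpc_addr: str = "/var/tmp/spdk.sock",
                    max_retries: int = 100, delay: float = 0.1) -> bool:
    for _ in range(max_retries):
        os.kill(pid, 0)  # raises ProcessLookupError if the target died early
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(rpc_addr)
                return True  # target is up and its RPC server is listening
            except OSError:
                time.sleep(delay)  # socket missing or not accepting yet; retry
    return False

# e.g. wait_for_listen(4076940) after launching nvmf_tgt, as the trace does.
```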
00:27:48.301 [2024-07-15 18:42:33.635182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.301 [2024-07-15 18:42:33.635213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.301 qpair failed and we were unable to recover it.
00:27:48.301 (the three lines above repeat 18 times in total for tqpair=0x7fb810000b90, 18:42:33.635182 - 18:42:33.638277)
00:27:48.301 [2024-07-15 18:42:33.638487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.301 [2024-07-15 18:42:33.638564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.301 qpair failed and we were unable to recover it.
00:27:48.301 [2024-07-15 18:42:33.638760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.301 [2024-07-15 18:42:33.638793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.301 qpair failed and we were unable to recover it.
00:27:48.301 [2024-07-15 18:42:33.638924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.638954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.639073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.639102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.639304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.639333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.639536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.639565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.639690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.639719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.639892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.639922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.640120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.640150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.640271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.640300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.301 [2024-07-15 18:42:33.640525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.301 [2024-07-15 18:42:33.640555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.301 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.640667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.640695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 
00:27:48.302 [2024-07-15 18:42:33.640884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.640913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.641070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.641223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.641432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.641575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.641795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.641978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.642244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.642397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.642537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 
00:27:48.302 [2024-07-15 18:42:33.642693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.642914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.642944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.643120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.643150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.643322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.643361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.643485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.643515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.643705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.643735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.643916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.643946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.644058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.644088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.644206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.644235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.644419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 
00:27:48.302 [2024-07-15 18:42:33.644647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.644676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.644848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.644877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.644990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.645246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.645397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.645598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.645737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.645889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.302 [2024-07-15 18:42:33.645918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.302 qpair failed and we were unable to recover it. 00:27:48.302 [2024-07-15 18:42:33.646192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-07-15 18:42:33.646226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.303 qpair failed and we were unable to recover it. 00:27:48.303 [2024-07-15 18:42:33.646359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.303 [2024-07-15 18:42:33.646389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.303 qpair failed and we were unable to recover it. 
00:27:48.308 [2024-07-15 18:42:33.681484] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization...
00:27:48.308 [2024-07-15 18:42:33.681539] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:48.308 [2024-07-15 18:42:33.681999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 18:42:33.682061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.308 [2024-07-15 18:42:33.682289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.308 [2024-07-15 18:42:33.682365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.308 qpair failed and we were unable to recover it.
00:27:48.312 [2024-07-15 18:42:33.709785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.312 [2024-07-15 18:42:33.709814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.312 qpair failed and we were unable to recover it. 00:27:48.312 [2024-07-15 18:42:33.710020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.312 [2024-07-15 18:42:33.710050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.312 qpair failed and we were unable to recover it. 00:27:48.312 [2024-07-15 18:42:33.710158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.312 [2024-07-15 18:42:33.710188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.312 qpair failed and we were unable to recover it. 00:27:48.312 [2024-07-15 18:42:33.710303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.312 [2024-07-15 18:42:33.710350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.312 qpair failed and we were unable to recover it. 00:27:48.312 [2024-07-15 18:42:33.710480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.312 [2024-07-15 18:42:33.710510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.312 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.710712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.710741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.710870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.710900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.711072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.711102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.711216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.711245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.711437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.711468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 
00:27:48.313 [2024-07-15 18:42:33.711703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.711733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.711903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.711933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.712039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.712069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.712275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.712304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.712544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.712574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.712700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.712729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.712923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.712953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.313 [2024-07-15 18:42:33.713139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.713169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.713358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.713389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 00:27:48.313 [2024-07-15 18:42:33.713515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.313 [2024-07-15 18:42:33.713544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.313 qpair failed and we were unable to recover it. 
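The EAL line above is informational rather than part of the connect-failure loop: DPDK's Environment Abstraction Layer, which SPDK builds on for memory management, is reporting that NUMA node 1 has no free 2048 kB hugepages. A minimal standalone sketch of how that per-node counter can be read, assuming the standard Linux sysfs hugepage layout (this is illustrative only, not DPDK or SPDK code):

    /* Read the free 2048 kB hugepage count for NUMA node 1, the figure
     * the EAL notice above refers to. Uses the standard Linux sysfs
     * hugepage interface; node/page-size path components are examples. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node 1: %ld free 2048 kB hugepages\n", free_pages);

        fclose(f);
        return 0;
    }

The sibling files nr_hugepages and surplus_hugepages in the same directory give the node's total and overflow counts, which together explain whether the pool was never reserved on that node or has simply been consumed.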
[... connect() retries on tqpair=0x7fb808000b90 keep failing with errno = 111 through 18:42:33.714436; a single attempt on tqpair=0x7fb800000b90 fails the same way at 18:42:33.714657, and the retries then move to tqpair=0x7fb810000b90 ...]
00:27:48.313 [2024-07-15 18:42:33.714948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.313 [2024-07-15 18:42:33.715014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.313 qpair failed and we were unable to recover it.
[... the triplet repeats continuously for tqpair=0x7fb810000b90 against addr=10.0.0.2, port=4420, timestamps 18:42:33.715609 through 18:42:33.743985, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:48.317 [2024-07-15 18:42:33.744158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.317 [2024-07-15 18:42:33.744187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.317 qpair failed and we were unable to recover it.
00:27:48.317 [2024-07-15 18:42:33.744299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.744329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.744444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.744474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.744581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.744611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.744738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.744775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.745009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.745039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.745224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.745259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.745385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.745415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.745607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.745636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.745810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.745840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.746074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.746103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 
00:27:48.317 [2024-07-15 18:42:33.746273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.746302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.746503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.746534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.746640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.746670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.317 [2024-07-15 18:42:33.746766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.317 [2024-07-15 18:42:33.746796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.317 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.747061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.747092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.747280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.747309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.747531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.747568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.747691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.747720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.747888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.747917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.748098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.748128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 
00:27:48.318 [2024-07-15 18:42:33.748333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.748378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.748547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.748577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.748763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.748792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.748911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.748939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.749131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.749160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.749279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.749308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.749512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.749555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.749738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.749775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.749953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.749984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.750194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.750234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 
00:27:48.318 [2024-07-15 18:42:33.750505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.750539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.750671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.750703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.750823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.750854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.751077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.751224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.751484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.751634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.751774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.751977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.752115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 
00:27:48.318 [2024-07-15 18:42:33.752352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.752566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.752702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.752862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.752903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.753945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.753975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 
00:27:48.318 [2024-07-15 18:42:33.754087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.754128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.754323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.754372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.754523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.754561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.754678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.754710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.318 qpair failed and we were unable to recover it. 00:27:48.318 [2024-07-15 18:42:33.754883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.318 [2024-07-15 18:42:33.754916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.755175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.755208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.755315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.755365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.755550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.755587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.755836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.755877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.756127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.756160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 
00:27:48.319 [2024-07-15 18:42:33.756352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.756395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.756638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.756671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.756867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.756900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.757162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.757195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.757382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.757416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.757679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.757713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.757911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.757944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.758074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.758107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.758280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.758317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.758577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.758611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 
00:27:48.319 [2024-07-15 18:42:33.758796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.758828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.758944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.758985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.759135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.759168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.759301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.759350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.759471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.759503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.759679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.759711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.759877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.759910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.760193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.760231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.760429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.760470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.760671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.760710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 
00:27:48.319 [2024-07-15 18:42:33.760914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.760948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.761093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.761126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.761325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.761378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.761669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.761702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.761904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.761937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.762134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.762173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.762303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.762377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.762501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.762533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.762727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.762760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.762959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.762996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 
00:27:48.319 [2024-07-15 18:42:33.763218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.763257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.763497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.763532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.763663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.763695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.763934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.763964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.764140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.319 [2024-07-15 18:42:33.764169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.319 qpair failed and we were unable to recover it. 00:27:48.319 [2024-07-15 18:42:33.764356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.764387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.764648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.764677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.764846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.764875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.765048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.765077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.765268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.765298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 
00:27:48.320 [2024-07-15 18:42:33.765585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.765615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.765806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.765836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.765957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.765987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.766163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.766192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.766377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:48.320 [2024-07-15 18:42:33.766404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.766433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.766698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.766728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.766859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.766889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.767017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.767046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.767243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.767272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 
00:27:48.320 [2024-07-15 18:42:33.767446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.767478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.767668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.767699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.767944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.767974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.768092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.768123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.768386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.768418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.768604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.768633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.768867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.768896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.769093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.769122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.769326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.769370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.769576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.769606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 
00:27:48.320 [2024-07-15 18:42:33.769812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.769841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.770028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.770057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.770179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.770209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.770473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.770504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.770764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.770794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.770961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.770991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.771189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.771225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.771446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.771478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.771616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.771647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 00:27:48.320 [2024-07-15 18:42:33.771832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.320 [2024-07-15 18:42:33.771862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.320 qpair failed and we were unable to recover it. 
00:27:48.320 [2024-07-15 18:42:33.772041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.320 [2024-07-15 18:42:33.772071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.320 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 18:42:33.772 to 18:42:33.815 for tqpair handles 0x7fb808000b90, 0x501ed0, 0x7fb800000b90, and 0x7fb810000b90, always against addr=10.0.0.2, port=4420 ...]
00:27:48.326 [2024-07-15 18:42:33.815956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.326 [2024-07-15 18:42:33.815985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.326 qpair failed and we were unable to recover it.
00:27:48.326 [2024-07-15 18:42:33.816119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 18:42:33.816148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 18:42:33.816349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 18:42:33.816385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 18:42:33.816577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 18:42:33.816606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 18:42:33.816812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 18:42:33.816842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.326 [2024-07-15 18:42:33.816973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.326 [2024-07-15 18:42:33.817002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.326 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 18:42:33.817191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 18:42:33.817219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 18:42:33.817414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 18:42:33.817444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 18:42:33.817565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 18:42:33.817593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.327 [2024-07-15 18:42:33.817759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.327 [2024-07-15 18:42:33.817788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.327 qpair failed and we were unable to recover it. 00:27:48.608 [2024-07-15 18:42:33.817969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.608 [2024-07-15 18:42:33.817998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.608 qpair failed and we were unable to recover it. 
00:27:48.608 [2024-07-15 18:42:33.818179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.608 [2024-07-15 18:42:33.818208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.608 qpair failed and we were unable to recover it. 00:27:48.608 [2024-07-15 18:42:33.818393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.608 [2024-07-15 18:42:33.818425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.608 qpair failed and we were unable to recover it. 00:27:48.608 [2024-07-15 18:42:33.818594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.818624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.818727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.818758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.818886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.818916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.819051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.819080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.819285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.819315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.819501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.819532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.819704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.819735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.819937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.819966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 
00:27:48.609 [2024-07-15 18:42:33.820074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.820104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.820223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.820253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.820513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.820544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.820656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.820686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.820944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.820974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.821098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.821127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.821318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.821359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.821563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.821592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.821872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.821910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.822181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.822212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 
00:27:48.609 [2024-07-15 18:42:33.822481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.822514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.822703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.822735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.822923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.822952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.823151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.823184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.823330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.823371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.823492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.823521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.823628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.823658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.823868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.823898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.824130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.824160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.824280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.824309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 
00:27:48.609 [2024-07-15 18:42:33.824438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.824471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.824659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.824689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.824816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.824846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.825964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.825995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.826235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.826265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 
00:27:48.609 [2024-07-15 18:42:33.826486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.826517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.826703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.826733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.609 qpair failed and we were unable to recover it. 00:27:48.609 [2024-07-15 18:42:33.826845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.609 [2024-07-15 18:42:33.826875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.827055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.827085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.827262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.827292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.827509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.827539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.827653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.827682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.827906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.827935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.828114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.828143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.828261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.828291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 
00:27:48.610 [2024-07-15 18:42:33.828468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.828498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.828736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.828766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.829004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.829033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.829154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.829184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.829300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.829329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.829627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.829657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.829837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.829866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.830049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.830079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.830250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.830284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.830592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.830622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 
00:27:48.610 [2024-07-15 18:42:33.830875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.830905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.831088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.831118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.831314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.831351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.831593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.831622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.831736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.831766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.831975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.832004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.832231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.832260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.832376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.832406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.832580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.832609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.832778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.832807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 
00:27:48.610 [2024-07-15 18:42:33.832999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.833028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.833266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.833295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.833479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.833509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.833620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.833650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.833766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.833795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.833974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.834004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.834186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.834216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.834386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.834416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.834583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.834613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.834728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.834758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 
00:27:48.610 [2024-07-15 18:42:33.834993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.835022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.835221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.835250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.835513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.610 [2024-07-15 18:42:33.835543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.610 qpair failed and we were unable to recover it. 00:27:48.610 [2024-07-15 18:42:33.835711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.835741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.835861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.835890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.836076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.836106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.836220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.836249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.836513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.836544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.836802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.836832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.836962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.836991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 
00:27:48.611 [2024-07-15 18:42:33.837224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.837254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.837384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.837414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.837582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.837611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.837802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.837831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.838004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.838033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.838306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.838343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.838473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.838503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.838785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.838814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.838941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.838978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.839180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.839212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 
00:27:48.611 [2024-07-15 18:42:33.839461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.839492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.839754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.839783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.839962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.839991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.840248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.840277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.840537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.840567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.840704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.840734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.840972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.841003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.841306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.841335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.841514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.841544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.841717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.841748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 
00:27:48.611 [2024-07-15 18:42:33.841929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.841959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.842141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.842171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.842371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.842403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.842537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.842566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.842741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.842770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.842909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.842939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.843199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.843229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.843407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.843438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.843623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.843653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 00:27:48.611 [2024-07-15 18:42:33.843850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.843880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it. 
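For anyone triaging this pattern: errno 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 is reachable but nothing is accepting TCP connections on the NVMe/TCP port 4420 yet, so every posix_sock_create() attempt is refused and the nvme_tcp layer gives up on the qpair. The sketch below is illustrative C, not SPDK source; it reproduces the same errno under the assumption of a reachable host with no listener on the port:

/* Illustrative sketch only -- not SPDK code. If the peer is reachable but no
 * listener is bound to the port, connect() fails with ECONNREFUSED, which is
 * errno 111 on Linux: the same "connect() failed, errno = 111" logged above.
 * (If the host were unreachable instead, errno would be EHOSTUNREACH or
 * ETIMEDOUT rather than 111.) */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}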
00:27:48.611 [2024-07-15 18:42:33.844007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.611 [2024-07-15 18:42:33.844038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.611 qpair failed and we were unable to recover it.
00:27:48.612 [... five more identical failure triples against tqpair=0x7fb808000b90 follow between 18:42:33.844 and 18:42:33.845 ...]
00:27:48.612 [2024-07-15 18:42:33.845349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:48.612 [2024-07-15 18:42:33.845376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:48.612 [2024-07-15 18:42:33.845384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:48.612 [2024-07-15 18:42:33.845391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:48.612 [2024-07-15 18:42:33.845396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:48.612 [2024-07-15 18:42:33.845507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:48.612 [2024-07-15 18:42:33.845615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:48.612 [2024-07-15 18:42:33.845721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:48.612 [2024-07-15 18:42:33.845723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:48.612 [... the failure triples continue in parallel with the startup NOTICEs above (the raw log interleaves them mid-message): two against tqpair=0x7fb808000b90 at 18:42:33.845, seven more through 18:42:33.847, then two against tqpair=0x501ed0 at 18:42:33.847-848, all with addr=10.0.0.2, port=4420 ...]
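A note on the trace NOTICEs above: they are the normal nvmf target startup banner, printed while the connect retries were still failing. Per those lines, 'spdk_trace -s nvmf -i 0' snapshots the live tracepoints while the target runs (the 0xFFFF mask presumably enables all tracepoint groups), and copying /dev/shm/nvmf_trace.0 preserves the raw trace buffer for offline debugging after the run; both commands are taken verbatim from the app_setup_trace output rather than assumed.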
00:27:48.616 [2024-07-15 18:42:33.885039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.885069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.885203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.885246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.885505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.885536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.885675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.885704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.885834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.885864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.886100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.886154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.886333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.886384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.886567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.886596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.886856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.886887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.887099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.887129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 
00:27:48.616 [2024-07-15 18:42:33.887232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.887261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.887500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.887531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.616 [2024-07-15 18:42:33.887711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.616 [2024-07-15 18:42:33.887741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.616 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.887909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.887939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.888053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.888083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.888214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.888243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.888363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.888394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.888563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.888592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.888851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.888888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.889009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.889038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 
00:27:48.617 [2024-07-15 18:42:33.889320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.889359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.889547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.889577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.889763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.889793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.889981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.890011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.890135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.890166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.890354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.890385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.890574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.890604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.890871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.890902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.891083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.891113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.891290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.891321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 
00:27:48.617 [2024-07-15 18:42:33.891585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.891616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.891797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.891828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.891975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.892195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.892352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.892502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.892724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.892868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.892899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.893100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.893133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.893252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.893282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 
00:27:48.617 [2024-07-15 18:42:33.893500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.893533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.893742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.893773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.893957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.893987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.894122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.894151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.894324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.894369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.894533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.894591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.617 qpair failed and we were unable to recover it. 00:27:48.617 [2024-07-15 18:42:33.894708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.617 [2024-07-15 18:42:33.894740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.894935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.894965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.895148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.895178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.895279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.895309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 
00:27:48.618 [2024-07-15 18:42:33.895568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.895605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.895771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.895804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.895926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.895955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.896123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.896152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.896334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.896371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.896637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.896666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.896856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.896886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.897059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.897088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.897322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.897365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.897543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.897573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 
00:27:48.618 [2024-07-15 18:42:33.897687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.897716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.897903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.897932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.898034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.898064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.898239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.898269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.898379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.898410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.898669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.898699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.898884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.898914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.899022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.899051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.899232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.899262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.899376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.899408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 
00:27:48.618 [2024-07-15 18:42:33.899592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.899622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.899858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.899888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.900030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.900060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.900298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.900328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.900562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.900593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.900863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.900893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.901096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.901125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.901244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.901274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.901388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.901419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.901682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.901712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 
00:27:48.618 [2024-07-15 18:42:33.901842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.901871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.902105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.902135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.902395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.902425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.902620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.902650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.902882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.902912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.903131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.903185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.903456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.618 [2024-07-15 18:42:33.903491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.618 qpair failed and we were unable to recover it. 00:27:48.618 [2024-07-15 18:42:33.903681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.903710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.903896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.903925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.904046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.904076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 
00:27:48.619 [2024-07-15 18:42:33.904293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.904323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.904524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.904555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.904687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.904716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.904840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.904869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.905059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.905088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.905353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.905383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.905520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.905550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.905673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.905703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.905895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.905932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.906119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.906148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 
00:27:48.619 [2024-07-15 18:42:33.906323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.906362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.906572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.906601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.906727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.906756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.906929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.906959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.907070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.907099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.907358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.907388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.907502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.907531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.907654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.907683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.907873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.907902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.908071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.908100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 
00:27:48.619 [2024-07-15 18:42:33.908268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.908297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.908486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.908517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.908786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.908816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.909049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.909078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.909264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.909293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.909444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.909474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.909660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.909689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.909878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.909907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.910076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.910105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.910291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.910321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 
00:27:48.619 [2024-07-15 18:42:33.910455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.910485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.910601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.910630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.910809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.910838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.911004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.911033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.911164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.911193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.911382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.911425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.911704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.911738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.911930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.619 [2024-07-15 18:42:33.911963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.619 qpair failed and we were unable to recover it. 00:27:48.619 [2024-07-15 18:42:33.912189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.912222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.912503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.912547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 
00:27:48.620 [2024-07-15 18:42:33.912802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.912836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.912965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.913005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.913132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.913162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.913444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.913481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.913655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.913694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.913981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.914013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.914241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.914279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.914508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.914542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.914680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.914715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.914902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.914931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 
00:27:48.620 [2024-07-15 18:42:33.915121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.915150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.915346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.915378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.915490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.915520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.915638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.915667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.915786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.915815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.916049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.916078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.916256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.916285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.916530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.916561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.916678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.916708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.916823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.916853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 
00:27:48.620 [2024-07-15 18:42:33.917108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.917138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.917334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.917373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.917640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.917670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.917799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.917829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.918013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.918043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.918299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.918328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.918618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.918648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.918769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.918803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.918985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.919014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.919258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.919288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 
00:27:48.620 [2024-07-15 18:42:33.919500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.919529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.919768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.919797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.919975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.920130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.920279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.920446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.920679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.920911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.920941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.620 [2024-07-15 18:42:33.921116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.620 [2024-07-15 18:42:33.921144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.620 qpair failed and we were unable to recover it. 00:27:48.621 [2024-07-15 18:42:33.921326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.621 [2024-07-15 18:42:33.921366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.621 qpair failed and we were unable to recover it. 
00:27:48.623 [2024-07-15 18:42:33.943664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 18:42:33.943704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
00:27:48.623 [2024-07-15 18:42:33.943831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.623 [2024-07-15 18:42:33.943870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.623 qpair failed and we were unable to recover it.
[... the same triple then repeats for tqpair=0x7fb800000b90 (18:42:33.944 to 33.952), tqpair=0x7fb808000b90 (18:42:33.952 to 33.960) and tqpair=0x7fb810000b90 (18:42:33.960 to 33.961), all with addr=10.0.0.2, port=4420 ...]
00:27:48.625 [2024-07-15 18:42:33.961938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.961968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.962150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.962180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.962357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.962387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.962648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.962678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.962857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.962887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.963095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.963125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.963252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.963281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.963427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.963464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.625 qpair failed and we were unable to recover it. 00:27:48.625 [2024-07-15 18:42:33.963705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.625 [2024-07-15 18:42:33.963734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.963937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.963966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 18:42:33.964077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.964107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.964388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.964420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.964552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.964584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.964751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.964780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.964955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.964984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.965097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.965126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.965297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.965326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.965578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.965608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.965725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.965754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.965933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.965962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 18:42:33.966172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.966201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.966396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.966427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.966597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.966627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.966820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.966849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.967033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.967061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.967194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.967224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.967356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.967409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.967617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.967647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.967858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.967888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.968146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.968175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 18:42:33.968290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.968319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.968458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.968488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.968724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.968754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.968990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.969019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.969305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.969353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.969566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.969595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.969764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.969793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.969991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.970020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.970255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.970284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.970548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.970579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 
00:27:48.626 [2024-07-15 18:42:33.970771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.970800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.970983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.971012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.971198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.971227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.971422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.971452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.626 [2024-07-15 18:42:33.971692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.626 [2024-07-15 18:42:33.971721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.626 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.971956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.971986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.972234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.972263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.972472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.972502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.972631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.972661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.972893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.972921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 18:42:33.973088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.973117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.973286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.973315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.973560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.973590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.973727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.973756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.974018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.974047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.974219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.974248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.974374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.974404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.974700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.974875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.974904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.975083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.975113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 18:42:33.975377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.975408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.975538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.975572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.975689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.975718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.975903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.975932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.976081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.976274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.976510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.976673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.976884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.976998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.977027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 18:42:33.977277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.977307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.977430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.977461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.977637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.977666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.977857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.977886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.978014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.978043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.978229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.978258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.978451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.978481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.978691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.978720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.978956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.978985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.979114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.979143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 
00:27:48.627 [2024-07-15 18:42:33.979316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.979356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.979475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.979505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.979606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.979634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.979814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.979843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.980035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.980064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.980236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.627 [2024-07-15 18:42:33.980265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.627 qpair failed and we were unable to recover it. 00:27:48.627 [2024-07-15 18:42:33.980451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.980481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.980609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.980638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.980871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.980900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.981083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.981113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 
00:27:48.628 [2024-07-15 18:42:33.981297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.981326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.981520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.981549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.981718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.981747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.981942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.981971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.982139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.982168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.982372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.982403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.982583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.982612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.982852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.982880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.983064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.983094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.983358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.983388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 
00:27:48.628 [2024-07-15 18:42:33.983506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.983535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.983819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.983849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.984068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.984105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.984235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.984265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.984382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.984414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.984704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.984733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.984969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.984998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.985127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.985156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.985257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.985286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.985477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.985507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 
00:27:48.628 [2024-07-15 18:42:33.985720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.985750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.985934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.985964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.986141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.986171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.986309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.986347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.986478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.986508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.986767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.986797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.986986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.987280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.987482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.987628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 
00:27:48.628 [2024-07-15 18:42:33.987789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.987925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.987955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.988145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.988175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.988357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.988388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.988627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.988657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.988785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.988815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.989069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.628 [2024-07-15 18:42:33.989100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.628 qpair failed and we were unable to recover it. 00:27:48.628 [2024-07-15 18:42:33.989216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.629 [2024-07-15 18:42:33.989246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.629 qpair failed and we were unable to recover it. 00:27:48.629 [2024-07-15 18:42:33.989505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.629 [2024-07-15 18:42:33.989536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.629 qpair failed and we were unable to recover it. 00:27:48.629 [2024-07-15 18:42:33.989738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.629 [2024-07-15 18:42:33.989767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.629 qpair failed and we were unable to recover it. 
00:27:48.629 [2024-07-15 18:42:33.990048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.629 [2024-07-15 18:42:33.990079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.629 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back with successive driver timestamps (18:42:33.990262 through 18:42:34.035372), always connect() errno = 111 on tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:27:48.634 [2024-07-15 18:42:34.035522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.634 [2024-07-15 18:42:34.035552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.634 qpair failed and we were unable to recover it.
00:27:48.634 [2024-07-15 18:42:34.035810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.035839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.035944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.035973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.036093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.036128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.036246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.036275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.036399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.036430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.036647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.036676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.036927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.036956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.037161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.037190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.037356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.037386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.037583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.037612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 18:42:34.037747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.037776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.037963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.037992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.038273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.038303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.038577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.038608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.038711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.038741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.039000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.039231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.039459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.039661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 00:27:48.634 [2024-07-15 18:42:34.039812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.634 qpair failed and we were unable to recover it. 
00:27:48.634 [2024-07-15 18:42:34.039958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.634 [2024-07-15 18:42:34.039987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.040105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.040135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.040393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.040423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.040655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.040684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.040941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.040970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.041156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.041185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.041331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.041368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.041501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.041531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.041701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.041730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.041860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.041889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 18:42:34.042135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.042164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.042381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.042412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.042698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.042728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.042935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.042965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.043200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.043230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.043402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.043431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.043632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.043661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.043859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.043889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.044070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.044100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.044284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.044314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 18:42:34.044466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.044633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.044662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.044859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.044894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.045137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.045166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.045423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.045454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.045574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.045603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.045869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.045898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.046106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.046135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.046247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.046276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.046477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.046507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 18:42:34.046682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.046711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.046887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.046916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.047106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.047136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.047269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.047297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.047490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.047520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.047688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.047717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.047902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.047932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.048116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.048146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.048320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.048357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.048479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.048509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 
00:27:48.635 [2024-07-15 18:42:34.048622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.048652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.048833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.048863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.049138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.049167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.049346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.049377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.635 qpair failed and we were unable to recover it. 00:27:48.635 [2024-07-15 18:42:34.049617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.635 [2024-07-15 18:42:34.049648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.049827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.049856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.050042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.050072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.050243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.050272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.050447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.050478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.050774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.050814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 18:42:34.050991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.051022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.051207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.051237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.051358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.051389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.051520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.051550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.051789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.051818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.052014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.052044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.052314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.052355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.052559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.052589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.052827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.052857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.052985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.053015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 18:42:34.053150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.053179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.053393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.053424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.053683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.053719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.053839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.053868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.053981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.054010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.054192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.054221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.054415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.054445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.054581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.054611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.054837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.054866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.055051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.055080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 18:42:34.055317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.055354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.055592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.055622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.055737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.055767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.056028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.056058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.056170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.056199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.056382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.056412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.056625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.056655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.056831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.056860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.057099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.057128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.057329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.057368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 
00:27:48.636 [2024-07-15 18:42:34.057489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.057519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.057730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.057760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.057860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.057889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.058130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.058159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.058258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.058287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.058506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.058536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.058793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.058823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.059090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.059120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.636 [2024-07-15 18:42:34.059315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.636 [2024-07-15 18:42:34.059354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.636 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.059526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.059560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 
00:27:48.637 [2024-07-15 18:42:34.059803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.059833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.060020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.060050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.060299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.060327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.060587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.060736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.060765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.061032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.061061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.061249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.061278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.061463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.061493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.061675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.061704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.061942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.061971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 
00:27:48.637 [2024-07-15 18:42:34.062149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.062179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.062360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.062390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.062654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.062689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.062870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.062899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.063166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.063196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.063469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.063498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.063756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.063785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.063898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.063928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.064112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.064142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 00:27:48.637 [2024-07-15 18:42:34.064273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.637 [2024-07-15 18:42:34.064302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.637 qpair failed and we were unable to recover it. 
00:27:48.637 [2024-07-15 18:42:34.064501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.637 [2024-07-15 18:42:34.064532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.637 qpair failed and we were unable to recover it.
00:27:48.642 [... the identical error pair (posix.c:1038:posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error) repeats continuously from 18:42:34.064501 through 18:42:34.107777, each attempt ending with "qpair failed and we were unable to recover it."; every attempt targets addr=10.0.0.2, port=4420, and the failing tqpair handles cycle through 0x7fb808000b90, 0x501ed0, 0x7fb810000b90, and 0x7fb800000b90 ...]
00:27:48.642 [2024-07-15 18:42:34.107958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.107987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.108175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.108204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.108359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.108390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.108565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.108595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.108762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.108791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.108910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.108940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.109058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.109087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.109256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.109285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.109536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.109566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.109686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.109715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 
00:27:48.642 [2024-07-15 18:42:34.109884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.109913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.110106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.110144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.110310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.110347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.110520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.110549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.110669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.110699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.110918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.110948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.111131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.111161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.111353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.111383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.111513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.111543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.111744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.111775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 
00:27:48.642 [2024-07-15 18:42:34.111956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.111985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.112156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.112185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.112375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.112406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.112577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.112606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.112785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.112818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.113016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.113045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.113227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.113257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.113438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.113469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.113752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.113782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.113894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.113923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 
00:27:48.642 [2024-07-15 18:42:34.114043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.114073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.114208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.114238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.114497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.114528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.114787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.114817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.114982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.115012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.115124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.115154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.115368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.115399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.115575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.115611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.642 [2024-07-15 18:42:34.115809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.642 [2024-07-15 18:42:34.115839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.642 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.116071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.116100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 
00:27:48.643 [2024-07-15 18:42:34.116198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.116227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.116351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.116381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.116573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.116796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.116826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.117816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.117845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 
00:27:48.643 [2024-07-15 18:42:34.118079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.118109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.118321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.118361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.118571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.118600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.118771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.118801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.118917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.118946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.119045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.119075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.119286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.119315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.119507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.119537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.119775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.119804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.119910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.119940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 
00:27:48.643 [2024-07-15 18:42:34.120107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.120136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.120368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.120400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.120572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.120602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.120841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.120871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.121951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.121980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 
00:27:48.643 [2024-07-15 18:42:34.122144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.122173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.122289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.122319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.122567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.122597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.122774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.122804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.122977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.123119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.123258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.123493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.123700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.123914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.123944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 
00:27:48.643 [2024-07-15 18:42:34.124110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.124139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.124306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.124335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.124537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.124567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.124685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.124713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.124952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.124980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.125158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.125187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.125472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.125502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.125630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.125659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.125843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.643 [2024-07-15 18:42:34.125871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.643 qpair failed and we were unable to recover it. 00:27:48.643 [2024-07-15 18:42:34.126116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.126145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 
00:27:48.644 [2024-07-15 18:42:34.126402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.126433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.126692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.126726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.126904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.126934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.127048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.127076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.127250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.127279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.127399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.127428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.127593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.127622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.127883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.127913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.128091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.128120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.128376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.128406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 
00:27:48.644 [2024-07-15 18:42:34.128519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.128555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.128792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.128821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.129947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.129977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.130164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.130192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.130310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.130351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 
00:27:48.644 [2024-07-15 18:42:34.130521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.130550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.130751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.130780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.130902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.130931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.131163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.131192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.131385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.131415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.131534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.131563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.131788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.131818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.131981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.132010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.132247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.132276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.132463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.132493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 
00:27:48.644 [2024-07-15 18:42:34.132617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.132646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.132853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.132882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.133119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.133148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.133257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.133286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.133458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.133487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.133654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.133683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.133875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.133904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.134020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.134049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.134297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.134325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 00:27:48.644 [2024-07-15 18:42:34.134518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.644 [2024-07-15 18:42:34.134548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.644 qpair failed and we were unable to recover it. 
00:27:48.644 [2024-07-15 18:42:34.134715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.644 [2024-07-15 18:42:34.134744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.644 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 18:42:34.134862 through 18:42:34.179048 (console time 00:27:48.644-00:27:48.931), cycling through tqpair=0x501ed0, 0x7fb808000b90, 0x7fb800000b90, and 0x7fb810000b90, always with addr=10.0.0.2, port=4420 ...]
00:27:48.931 [2024-07-15 18:42:34.179223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.179253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.179428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.179458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.179718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.179747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.179930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.179960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.180196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.180225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.180421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.180451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.180636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.180665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.180839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.180869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.180997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.181027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 00:27:48.931 [2024-07-15 18:42:34.181283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.931 [2024-07-15 18:42:34.181313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.931 qpair failed and we were unable to recover it. 
00:27:48.931 [2024-07-15 18:42:34.181438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.181469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.181634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.181664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.181899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.181927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.182115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.182144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.182314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.182353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.182522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.182552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.182788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.182818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.183076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.183105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.183356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.183387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.183513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.183542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 
00:27:48.932 [2024-07-15 18:42:34.183784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.183819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.184021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.184051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.184320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.184359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.184601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.184631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.184915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.184944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.185202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.185231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.185418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.185449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.185554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.185583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.185784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.185814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.185944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.185973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 
00:27:48.932 [2024-07-15 18:42:34.186095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.186124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.186224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.186252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.186444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.186474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.186591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.186621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.186799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.186829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.187026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.187055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.187221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.187251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.187515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.187545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.187726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.187756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.187876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.187905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 
00:27:48.932 [2024-07-15 18:42:34.188024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.188053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.188265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.188294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.188533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.188563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.188665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.188695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.188887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.188916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.189024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.189053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.189243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.189272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.189392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.189424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.189553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.189582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.189817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.189847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 
00:27:48.932 [2024-07-15 18:42:34.190025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.932 [2024-07-15 18:42:34.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.932 qpair failed and we were unable to recover it. 00:27:48.932 [2024-07-15 18:42:34.190318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.190360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.190533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.190563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.190746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.190775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.190964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.190992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.191235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.191266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.191447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.191478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.191712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.191742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.191908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.191938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.192118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.192147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 
00:27:48.933 [2024-07-15 18:42:34.192331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.192374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.192633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.192663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.192782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.192812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.192979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.193009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.193178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.193207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.193443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.193474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.193655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.193685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.193950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.193979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.194104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.194134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.194257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.194286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 
00:27:48.933 [2024-07-15 18:42:34.194468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.194498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.194758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.194788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.195021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.195051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.195286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.195316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.195500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.195530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.195711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.195741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.195993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.196236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.196379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.196664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 
00:27:48.933 [2024-07-15 18:42:34.196816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.196960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.196990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.197232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.197261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.197507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.197537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.197720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.197749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.197931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.197960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.198138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.198168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.198306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.198335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.198540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.198570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.198763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.198793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 
00:27:48.933 [2024-07-15 18:42:34.199066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.199096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.199266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.933 [2024-07-15 18:42:34.199296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.933 qpair failed and we were unable to recover it. 00:27:48.933 [2024-07-15 18:42:34.199509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.199539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.199778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.199808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.199994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.200153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.200371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.200588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.200743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.200956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.200985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 
00:27:48.934 [2024-07-15 18:42:34.201110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.201146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.201328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.201381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.201639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.201668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.201858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.201888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.202004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.202034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.202135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.202165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.202428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.202459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.202694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.202724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.202906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.202936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.203112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.203142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 
00:27:48.934 [2024-07-15 18:42:34.203250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.203280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.203476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.203507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.203693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.203723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.203850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.203879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.204149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.204179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.204299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.204330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.204473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.204504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.204652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.204682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.204792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.204822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.205062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.205092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 
00:27:48.934 [2024-07-15 18:42:34.205262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.205292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.205475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.205505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.205614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.205645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.205827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.205857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.205979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.206129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.206401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.206563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.206727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.206968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.206997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 
00:27:48.934 [2024-07-15 18:42:34.207180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.207210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.207424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.207455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.207673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.934 [2024-07-15 18:42:34.207703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.934 qpair failed and we were unable to recover it. 00:27:48.934 [2024-07-15 18:42:34.207819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.207849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.208897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.208926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 
00:27:48.935 [2024-07-15 18:42:34.209095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.209130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.209367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.209397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.209586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.209615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.209807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.209837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.209968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.209997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.210166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.210195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.210431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.210462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.210582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.210611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.210744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.210773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.210942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.210971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 
00:27:48.935 [2024-07-15 18:42:34.211088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.211117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.211288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.211318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.211535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.211565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.211693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.211723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.211866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.212051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.212081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.212348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.212378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.212479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.212509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.212717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.212747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.212928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.212957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 
00:27:48.935 [2024-07-15 18:42:34.213060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.213211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.213381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.213525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.213741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.213871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.213901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.214021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.214051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.214315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.214354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.214484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.214514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 00:27:48.935 [2024-07-15 18:42:34.214749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.214778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.935 qpair failed and we were unable to recover it. 
00:27:48.935 [2024-07-15 18:42:34.214945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.935 [2024-07-15 18:42:34.214975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.215155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.215184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.215422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.215453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.215726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.215756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.215878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.215907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.216173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.216203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.216415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.216446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.216628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.216657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.216841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.216871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.217056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.217085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 
00:27:48.936 [2024-07-15 18:42:34.217266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.217302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.217479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.217509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.217689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.217719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.217977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.218007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.218191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.218220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.218405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.218436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.218607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.218637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.218839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.218869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.219039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.219068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.219256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.219285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 
00:27:48.936 [2024-07-15 18:42:34.219540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.219571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.219750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.219780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.219965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.219994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.220104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.220133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.220396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.220427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.220638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.220668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.220908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.220938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.221108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.221261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.221420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 
00:27:48.936 [2024-07-15 18:42:34.221639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.221787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.221934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.221964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.222134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.222163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.222432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.222462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.222583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.222613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.222795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.222825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.223014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.223044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.936 [2024-07-15 18:42:34.223303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.936 [2024-07-15 18:42:34.223333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.936 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.223579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.223609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 
00:27:48.937 [2024-07-15 18:42:34.223845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.223875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.223991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.224021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.224242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.224271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.224457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.224488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.224691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.224721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.224912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.224942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.225122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.225152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.225354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.225384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.225563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.225593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.225775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.225805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 
00:27:48.937 [2024-07-15 18:42:34.226063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.226098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.226298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.226327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.226466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.226496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.226736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.226765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.227016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.227045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.227242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.227271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.227459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.227489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.227751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.227779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.228036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.228065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.228301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.228330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 
00:27:48.937 [2024-07-15 18:42:34.228525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.228556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.228818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.228848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.228961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.228990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.229251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.229280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.229437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.229467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.229660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.229689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.229884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.229912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.230095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.230123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.230385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.230416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.230536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.230565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 
00:27:48.937 [2024-07-15 18:42:34.230733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.230762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.230931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.230960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.231255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.231285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.231570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.231600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.231844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.231873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.231998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.232026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.232216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.232244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.232500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.232530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.232773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.232802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 00:27:48.937 [2024-07-15 18:42:34.232917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.937 [2024-07-15 18:42:34.232947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.937 qpair failed and we were unable to recover it. 
00:27:48.938 [2024-07-15 18:42:34.233130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.233159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.233265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.233293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.233525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.233556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.233824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.233853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.234041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.234070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.234256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.234286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.234528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.234558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.234794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.234823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.234947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.234976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.235155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.235185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 
00:27:48.938 [2024-07-15 18:42:34.235357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.235393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.235574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.235603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.235836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.235865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.236078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.236107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.236352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.236382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.236581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.236611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.236795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.236826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.237082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.237110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.237245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.237274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.237492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.237522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 
00:27:48.938 [2024-07-15 18:42:34.237691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.237719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.237884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.237913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.238966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.238995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.239182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.239211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.239334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.239373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 
00:27:48.938 [2024-07-15 18:42:34.239548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.239578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.239755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.239784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.239961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.239990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.240169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.240198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.240439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.240469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.240638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.240667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.240849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.240878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.241100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.241142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.241405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 00:27:48.938 [2024-07-15 18:42:34.241677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.938 [2024-07-15 18:42:34.241707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.938 qpair failed and we were unable to recover it. 
00:27:48.938 [2024-07-15 18:42:34.241905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.241935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.242174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.242204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.242387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.242418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.242544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.242573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.242764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.242793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.243034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.243063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.243195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.243224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.243418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.243448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.243704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.243733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.244000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.244029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 
00:27:48.939 [2024-07-15 18:42:34.244194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.244222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.244503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.244537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.244802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.244831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.244996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.245026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.245266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.245295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.245497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.245527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.245658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.245687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.245853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.245883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.246139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.246168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.246335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.246376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 
00:27:48.939 [2024-07-15 18:42:34.246612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.246641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.246804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.246833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.246944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.246973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.247177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.247206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.247419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.247455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.247662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.247691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.247811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.247840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.248017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.248046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.248325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.248363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.248551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.248581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 
00:27:48.939 [2024-07-15 18:42:34.248694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.248723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.248963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.248993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.249162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.249191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.249382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.249413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.249607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.249636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.249807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.249836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.250042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.250072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.250198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.250227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.250413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.250444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.250621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.250651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 
00:27:48.939 [2024-07-15 18:42:34.250882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.250912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.939 [2024-07-15 18:42:34.251037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.939 [2024-07-15 18:42:34.251067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.939 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.251298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.251327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.251574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.251604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.251769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.251799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.251914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.251943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.252060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.252090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.252276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.252306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.252457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.252488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.252677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.252707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 
00:27:48.940 [2024-07-15 18:42:34.252891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.252921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.253097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.253131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.253304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.253333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.253608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.253639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.253824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.253853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.254098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.254127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.254359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.254390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.254632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.254662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.254924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.254955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.255244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.255274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 
00:27:48.940 [2024-07-15 18:42:34.255408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.255438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.255626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.255656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.255860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.255889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.255998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.256028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.256204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.256234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.256350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.256393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.256643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.256672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.256793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.256822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.257105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.257134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.257265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.257294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 
00:27:48.940 [2024-07-15 18:42:34.257484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.257515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.257751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.257780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.257895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.257924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.258109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.258138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.258317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.258356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.258476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.258506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.258681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.940 [2024-07-15 18:42:34.258710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.940 qpair failed and we were unable to recover it. 00:27:48.940 [2024-07-15 18:42:34.258878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.258908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.259162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.259202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.259469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.259499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 
00:27:48.941 [2024-07-15 18:42:34.259666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.259696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.259880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.259908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.260948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.260978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.261159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.261187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.261415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.261444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 
00:27:48.941 [2024-07-15 18:42:34.261679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.261708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.261875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.261904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.262112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.262142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.262394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.262423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.262553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.262582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.262861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.262890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.263149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.263177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.263294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.263323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.263467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.263497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.263627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.263656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 
00:27:48.941 [2024-07-15 18:42:34.263774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.263803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.264003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.264032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.264215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.264244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.264477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.264508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.264637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.264665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.264852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.264886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.265003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.265147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.265286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.265449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 
00:27:48.941 [2024-07-15 18:42:34.265665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.265956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.265986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.266125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.266154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.266269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.266298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.266577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.266608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.266846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.266875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.267055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.267084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.267253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.941 [2024-07-15 18:42:34.267282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.941 qpair failed and we were unable to recover it. 00:27:48.941 [2024-07-15 18:42:34.267469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.267505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.267695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.267724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 
00:27:48.942 [2024-07-15 18:42:34.267962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.267991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.268120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.268148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.268272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.268301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.268496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.268527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.268732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.268761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.268940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.268968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.269155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.269184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.269369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.269399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.269637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.269666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.269774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.269803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 
00:27:48.942 [2024-07-15 18:42:34.269920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.269948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.270128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.270156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.270422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.270452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.270568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.270597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.270834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.270863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.270972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.271001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.271177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.271206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.271394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.271424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.271598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.271627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.271808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.271837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 
00:27:48.942 [2024-07-15 18:42:34.272010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.272038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.272225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.272254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.272363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.272394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.272584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.272613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.272745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.272774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.272980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.273017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.273203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.273233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.273424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.273456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.273639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.273668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.273768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.273798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 
00:27:48.942 [2024-07-15 18:42:34.274010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.274040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.274223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.274253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.274438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.274468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.274661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.274690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.274875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.274904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.275099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.275128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.275300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.275329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.275605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.275635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.942 [2024-07-15 18:42:34.275870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.942 [2024-07-15 18:42:34.275906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.942 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.276074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.276103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 
00:27:48.943 [2024-07-15 18:42:34.276287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.276317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.276518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.276548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.276737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.276766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.276870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.276899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.277895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.277924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 
00:27:48.943 [2024-07-15 18:42:34.278023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.278056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.278258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.278287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.278608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.278639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.278761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.278789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.279074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.279224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.279453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.279649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.279848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.279977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.280007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 
00:27:48.943 [2024-07-15 18:42:34.280190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.280219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.280427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.280458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.280725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.280755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.280940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.280969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.281216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.281246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.281469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.281506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.281635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.281664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.281839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.281868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.282035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.282064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 00:27:48.943 [2024-07-15 18:42:34.282182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.943 [2024-07-15 18:42:34.282211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.943 qpair failed and we were unable to recover it. 
00:27:48.948 [2024-07-15 18:42:34.316291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.316320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.316520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.316551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.316772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.316811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.316937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.316968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.317084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.317115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.317292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.317322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.317502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.317532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.317749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.317779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.317962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.317992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.318122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.948 [2024-07-15 18:42:34.318152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.948 qpair failed and we were unable to recover it.
00:27:48.948 [2024-07-15 18:42:34.318323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.318363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.318563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.318593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.318780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.318809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.318991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.319021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.319206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.319235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.319423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.319460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.319704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.319734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.319993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.320023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.320284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.320314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.320576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.320607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 
00:27:48.948 [2024-07-15 18:42:34.320727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.320758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.320994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.321889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.321988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.322018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.322122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.322152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.322326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.322365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 
00:27:48.948 [2024-07-15 18:42:34.322533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.322563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.322807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.322836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.323074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.323103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.323224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.323253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.948 [2024-07-15 18:42:34.323446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.948 [2024-07-15 18:42:34.323476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.948 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.323597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.323626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.323828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.323858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.324028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.324058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.324317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.324353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.324535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.324565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 
00:27:48.949 [2024-07-15 18:42:34.324740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.324769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.324948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.324978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.325172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.325207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.325449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.325480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.325603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.325632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.325751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.325780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.325880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.325909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.326175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.326203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.326370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.326399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.326581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.326610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 
00:27:48.949 [2024-07-15 18:42:34.326717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.326864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.326893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.327089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.327118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.327237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.327267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.327441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.327472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.327621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.327655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.327771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.327801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.328006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.328035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.328278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.328307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.328521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.328551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 
00:27:48.949 [2024-07-15 18:42:34.328817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.328846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.328981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.329133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.329288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.329511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.329735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.329942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.329971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.330087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.330240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.330449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 
00:27:48.949 [2024-07-15 18:42:34.330587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.330794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.330937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.330967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.331148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.331178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.331358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.331388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.949 [2024-07-15 18:42:34.331578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.949 [2024-07-15 18:42:34.331608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.949 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.331721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.331752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.331920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.331949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.332202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.332232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.332368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.332398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 
00:27:48.950 [2024-07-15 18:42:34.332597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.332627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.332739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.332768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.332964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.333005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.333209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.333243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.333371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.333403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.333635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.333664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.333864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.333894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.334166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.334196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.334383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.334413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.334528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.334557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 
00:27:48.950 [2024-07-15 18:42:34.334741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.334770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.334884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.334913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.335200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.335230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.335402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.335431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.335637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.335666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.335864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.335894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.336114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.336143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.336252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.336282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.336555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.336585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.336706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.336735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 
00:27:48.950 [2024-07-15 18:42:34.336913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.336942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.337125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.337154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.337331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.337372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.337553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.337582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.337794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.337822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.338008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.338036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.338224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.338254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.338495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.338524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.338727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.338756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.338939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.338973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 
00:27:48.950 [2024-07-15 18:42:34.339217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.339246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.339454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.339483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.339607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.339636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.339754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.339784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.339965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.339994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.340170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.340199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.340422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.950 [2024-07-15 18:42:34.340452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.950 qpair failed and we were unable to recover it. 00:27:48.950 [2024-07-15 18:42:34.340644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.340673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.340944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.340973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.341102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.341131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 18:42:34.341263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.341292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.341469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.341498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.341708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.341738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.341924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.341954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.342119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.342149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.342316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.342354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.342537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.342566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.342801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.342830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.343089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.343118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.343403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.343432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 18:42:34.343616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.343645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.343882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.343912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.344097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.344125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.344381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.344412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.344593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.344622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.344804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.344834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.345065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.345095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.345233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.345262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.345431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.345461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.345641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.345670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 18:42:34.345849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.345878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.346071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.346101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.346267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.346297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.346472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.346501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.346680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.346709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.346910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.346940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.347113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.347142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.347362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.347393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.347570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.347599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.347788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.347818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 
00:27:48.951 [2024-07-15 18:42:34.348017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.348053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.348230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.348260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.348506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.348536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.348649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.348679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.348850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.348880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.349010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.349039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.349168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.349198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.349298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.951 [2024-07-15 18:42:34.349327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.951 qpair failed and we were unable to recover it. 00:27:48.951 [2024-07-15 18:42:34.349521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.349551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.349791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.349827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.952 [2024-07-15 18:42:34.349993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.350022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.350130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.350160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.350399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.350432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.350641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.350676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.350846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.350875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.351073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.351102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.351287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.351315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.351445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.351475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.351671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.351700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 00:27:48.952 [2024-07-15 18:42:34.351832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.952 [2024-07-15 18:42:34.351861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.952 qpair failed and we were unable to recover it. 
00:27:48.953 [2024-07-15 18:42:34.360681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.360711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.360889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.360918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.361175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.361205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.361381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.361412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.361579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.361608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.361712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.361741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.361860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.361893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.362023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.362057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.362245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.362275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.953 [2024-07-15 18:42:34.362460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.953 [2024-07-15 18:42:34.362491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.953 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.368923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.368952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.369151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.369179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.369374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.369405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.369591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.369619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.369866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.369895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.370138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.370167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.370354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.370384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.370535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.370572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.370839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.370869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.371111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.954 [2024-07-15 18:42:34.371141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.954 qpair failed and we were unable to recover it.
00:27:48.954 [2024-07-15 18:42:34.371331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.371373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.371559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.371589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.371824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.371854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.372047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.372077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.372207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.372236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.372476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.372507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.372693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.372723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.372910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.372940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.373148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.373178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.373357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.373389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 
00:27:48.954 [2024-07-15 18:42:34.373567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.373604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.373795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.373825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.373940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.373969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.374185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.374215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.954 qpair failed and we were unable to recover it. 00:27:48.954 [2024-07-15 18:42:34.374352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.954 [2024-07-15 18:42:34.374383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.374646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.374675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.374780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.374810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.374993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.375023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.375204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.375234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.375421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.375452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 18:42:34.375634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.375664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.375838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.375868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.376061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.376091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.376359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.376389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.376603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.376633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.376807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.376837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.377032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.377178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.377382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.377586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 18:42:34.377800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.377960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.377990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.378164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.378194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.378452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.378483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.378650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.378680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.378849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.378878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.379112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.379142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.379371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.379407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.379667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.379697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.379906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.379936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 18:42:34.380106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.380136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.380245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.380275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.380513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.380543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.380726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.380756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.380941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.380971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.381148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.381178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.381372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.381402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.381517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.381547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.381732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.381761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.381953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.381982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 
00:27:48.955 [2024-07-15 18:42:34.382214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.382244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.382434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.382465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.382734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.382768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.382893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.382922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.383025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.955 [2024-07-15 18:42:34.383055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.955 qpair failed and we were unable to recover it. 00:27:48.955 [2024-07-15 18:42:34.383189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.383219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.383473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.383508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.383792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.383823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.384011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.384041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.384239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 18:42:34.384448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.384478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.384671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.384701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.384908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.384938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.385052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.385082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.385290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.385320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.385464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.385495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.385659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.385689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.385862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.385892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.386016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.386044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.386228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.386255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 18:42:34.386464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.386493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.386663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.386692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.386887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.386915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.387173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.387201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.387333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.387368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.387549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.387577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.387814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.387842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.388056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.388090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.388307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.388335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.388456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.388485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 18:42:34.388721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.388751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.388988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.389128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.389396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.389604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.389755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.389962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.389992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.390159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.390186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.390318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.390357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 00:27:48.956 [2024-07-15 18:42:34.390620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.956 [2024-07-15 18:42:34.390648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.956 qpair failed and we were unable to recover it. 
00:27:48.956 [2024-07-15 18:42:34.390904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.390933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.956 [2024-07-15 18:42:34.391203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.391231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.956 [2024-07-15 18:42:34.391374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.391404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.956 [2024-07-15 18:42:34.391530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.391558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.956 [2024-07-15 18:42:34.391677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.391706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.956 [2024-07-15 18:42:34.391884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.956 [2024-07-15 18:42:34.391914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.956 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.392177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.392207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.392344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.392375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.392609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.392639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.392825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.392855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.392969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.392999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.393173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.393202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.393310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.393348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.393534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.393564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.393776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.393806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.393988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.394018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.394184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.394214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.394342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.394384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.394686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.394716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.394900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.394930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.395910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.395939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.396052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.396081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.396282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.396317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.396494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.396526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.396705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.396735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.397855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.397885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.398133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.398162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.398277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.398307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.398590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.398625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.398751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.398781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.398902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.957 [2024-07-15 18:42:34.398931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.957 qpair failed and we were unable to recover it.
00:27:48.957 [2024-07-15 18:42:34.399134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.399164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.399422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.399452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.399621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.399650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.399884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.399914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.400079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.400109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.400299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.400329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.400449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.400478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.400661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.400690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.400935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.400964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.401092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.401122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.401345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.401375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.401555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.401585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.401782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.401812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.402061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.402091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.402354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.402385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.402639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.402669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.402915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.402944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.403131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.403160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.403347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.403378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.403610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.403641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.403834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.403863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.404049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.404078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.404245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.404275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.404395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.404425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.404658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.404687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.404800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.404841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.405070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.405106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.405285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.405315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.405523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.405553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.405689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.405719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.405849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.405879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.406048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.406078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.406260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.406289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.406471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.406501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.406680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.406710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.406822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.406852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.407036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.407066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.407260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.407290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.407433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.407463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.407586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.407616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.958 [2024-07-15 18:42:34.407800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.958 [2024-07-15 18:42:34.407830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.958 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.408088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.408118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.408298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.408327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.408531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.408562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.408694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.408723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.408862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.408892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.409160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.409189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.409364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.409405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.409523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.409552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.409669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.409699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.409872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.409902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.410086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.410115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.410238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.410268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.410443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.410475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.410668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.410698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.411009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.411039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.411224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.411254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.411494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.411525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.411658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.411687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.411919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.411949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.412064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.412094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.412355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.412386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.412641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.412671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.412929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.412959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.413072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.413102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.413354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.413386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.413507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.413549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.413814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.413844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.414017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.414047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.414155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.414185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.414447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.414477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.414660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.414690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.414866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.414897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.415017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.415046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.415162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.415192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.415409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.415440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.415542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.415571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.415859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.415889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.416140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.416170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.416428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.416458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.416631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.416661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.416863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.416893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.417077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.417107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.417277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.417307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.959 qpair failed and we were unable to recover it.
00:27:48.959 [2024-07-15 18:42:34.417510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.959 [2024-07-15 18:42:34.417545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.417671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.417700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.417824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.417853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.418019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.418049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.418282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.418312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.418428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.418459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.418737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.418767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.418954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.418983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.419120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.419150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.419345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.419375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.419486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.419516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.419692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.419722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.419845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.419875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.420047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.420077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.420309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.420347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.420576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.420606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.420806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.420836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.421004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.421034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.421287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.421318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.421571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.421601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.421731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.421760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.421924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.421954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.422142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.422177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.422292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.422321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.422446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.422476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.422641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.422672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.422850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.422880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.423069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.423098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.423281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.423311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.423449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.423490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.423691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.423724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.423907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.423937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.424087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.424393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.424554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.424719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.424861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.424978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.425121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.425333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.425570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.425782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.425919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.425950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.426132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.426162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.960 [2024-07-15 18:42:34.426356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.960 [2024-07-15 18:42:34.426387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:48.960 qpair failed and we were unable to recover it.
00:27:48.961 [2024-07-15 18:42:34.426562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.426591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.426764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.426794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.426999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.427029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.427206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.427236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.427492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.427524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.427736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.427767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.427976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.428006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.428177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.428208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.428445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.428476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.428736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.428766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 
00:27:48.961 [2024-07-15 18:42:34.429008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.429038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.429162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.429191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.429420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.429451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.429641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.429671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.429780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.429809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.430085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.430116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.430233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.430263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.430392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.430428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.430615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.430645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.430825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.430856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 
00:27:48.961 [2024-07-15 18:42:34.430978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.431008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.431121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.431151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.431382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.431414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.431647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.431677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.431881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.431911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.432197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.432227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.432461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.432492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.432685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.432715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.432836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.432866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.432992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.433022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 
00:27:48.961 [2024-07-15 18:42:34.433130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.433161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.433430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.433461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.433706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.433736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.433968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.433999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.434167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.434198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.434401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.434432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.961 qpair failed and we were unable to recover it. 00:27:48.961 [2024-07-15 18:42:34.434612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.961 [2024-07-15 18:42:34.434641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.434822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.434853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.434957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.434987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.435152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.435182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 
00:27:48.962 [2024-07-15 18:42:34.435368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.435400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.435505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.435535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.435713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.435743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.435877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.435907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.436113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.436147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.436279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.436309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.436425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.436456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.436626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.436656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.436833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.436864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.437040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.437070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 
00:27:48.962 [2024-07-15 18:42:34.437332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.437372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.437485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.437515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.437648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.437678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.437915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.437945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.438124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.438154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.438272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.438302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.438569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.438601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.438725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.438760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.438946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.438976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.439233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.439263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 
00:27:48.962 [2024-07-15 18:42:34.439488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.439519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.439708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.439738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.439945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.439974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.440162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.440192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.440476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.440506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.440685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.440714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.440906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.440936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.441170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.441199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.441480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.441512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.441752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.441782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 
00:27:48.962 [2024-07-15 18:42:34.441969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.442000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.442262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.442292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.442487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.442518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.442689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.442719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.442911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.442941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.443180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.443210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.443379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.443411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.443524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.443553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.443759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.443790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.443909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.443939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 
00:27:48.962 [2024-07-15 18:42:34.444043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.444073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.444255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.444426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.444456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.962 qpair failed and we were unable to recover it. 00:27:48.962 [2024-07-15 18:42:34.444592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.962 [2024-07-15 18:42:34.444622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.444774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.444812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.444997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.445027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.445197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.445227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.445354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.445385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.445528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.445558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.445758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.445788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 
00:27:48.963 [2024-07-15 18:42:34.445989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.446019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.446209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.446239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.446422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.446453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.446569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.446599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.446810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.446840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.447109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.447139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.447376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.447406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.447545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.447574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.447749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.447779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.448072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.448101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 
00:27:48.963 [2024-07-15 18:42:34.448201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.448231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.448361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.448392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.448608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.448638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.448867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.448897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.449176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.449207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.449407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.449437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.449602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.449633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.449816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.449846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.449977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.450007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.450190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.450220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 
00:27:48.963 [2024-07-15 18:42:34.450352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.450383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.450581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.450617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.450810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.450839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.451099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.451128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.451362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.451393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.451655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.451684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.451894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.451923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.452090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.452119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.452307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.452345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.452622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.452650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 
00:27:48.963 [2024-07-15 18:42:34.452817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.452845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.453056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.453251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.453395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.453613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.453857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.453985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.454013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.454264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.454292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.454509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.454538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.454710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.454739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 
00:27:48.963 [2024-07-15 18:42:34.454947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.454975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.963 qpair failed and we were unable to recover it. 00:27:48.963 [2024-07-15 18:42:34.455116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.963 [2024-07-15 18:42:34.455144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.455380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.455409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.455586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.455614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.455848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.455877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.456079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.456107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.456298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.456327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.456463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.456491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.456666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.456700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.456873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.456901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 18:42:34.457042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.457071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.457265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.457293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.457543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.457572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.457848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.457876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.458110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.458139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.458374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.458405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.458518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.458548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.458674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.458703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.458869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.458908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 00:27:48.964 [2024-07-15 18:42:34.459168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.964 [2024-07-15 18:42:34.459197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:48.964 qpair failed and we were unable to recover it. 
00:27:48.964 [2024-07-15 18:42:34.459383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:48.964 [2024-07-15 18:42:34.459414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420
00:27:48.964 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats without interruption from 18:42:34.459 to 18:42:34.504 (elapsed 00:27:48.964 through 00:27:49.240), cycling through tqpair handles 0x501ed0, 0x7fb810000b90, 0x7fb808000b90, and 0x7fb800000b90; every attempt fails to connect to 10.0.0.2, port 4420 with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:27:49.240 [2024-07-15 18:42:34.504807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.504836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.505074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.505103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.505274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.505304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.505547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.505577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.505809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.505838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.506018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.506048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.506172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.506202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.506402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.506433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.506670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.506699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.506906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.506936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 
00:27:49.240 [2024-07-15 18:42:34.507054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.507083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.507268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.507297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.507563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.507594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.507764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.507793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.507960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.507989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.508234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.508264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.508375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.508406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.508539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.508569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.508790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.508820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.509007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 
00:27:49.240 [2024-07-15 18:42:34.509212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.509419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.509577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.509704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.509848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.240 [2024-07-15 18:42:34.509877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.240 qpair failed and we were unable to recover it. 00:27:49.240 [2024-07-15 18:42:34.510085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.510115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.510316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.510353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.510538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.510567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.510768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.510798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.510964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.510994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 
00:27:49.241 [2024-07-15 18:42:34.511112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.511141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.511428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.511458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.511578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.511607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.511745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.511775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.512009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.512038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.512148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.512177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.512364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.512395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.512632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.512661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.512909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.512938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.513162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.513191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 
00:27:49.241 [2024-07-15 18:42:34.513380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.513410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.513530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.513560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.513788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.513818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.514002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.514145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.514345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.241 [2024-07-15 18:42:34.514481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.514644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:49.241 [2024-07-15 18:42:34.514806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 
00:27:49.241 [2024-07-15 18:42:34.514970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.514999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:49.241 [2024-07-15 18:42:34.515161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.515190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.515307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.515354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.241 [2024-07-15 18:42:34.515476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.515506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.241 [2024-07-15 18:42:34.515681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.515713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.515832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.515861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.516032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.516061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.516227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.516256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.516438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.516474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 
00:27:49.241 [2024-07-15 18:42:34.516663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.241 [2024-07-15 18:42:34.516693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.241 qpair failed and we were unable to recover it. 00:27:49.241 [2024-07-15 18:42:34.516880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.516911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.517095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.517124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.517251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.517280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.517531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.517561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.517795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.517824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.518058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.518087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.518268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.518297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.518420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.518451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.518586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.518615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 
00:27:49.242 [2024-07-15 18:42:34.518798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.518828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.519044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.519074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.519343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.519374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.519641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.519673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.519919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.519949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.520077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.520107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.520319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.520358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.520549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.520579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.520693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.520722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.520991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 
00:27:49.242 [2024-07-15 18:42:34.521138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.521278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.521443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.521611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.521833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.521863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.521974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.522157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.522300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.522462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.522706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 
00:27:49.242 [2024-07-15 18:42:34.522918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.522949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.523076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.523105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.523278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.523307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.523591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.523631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.523832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.523865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.523987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.524016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.524183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.524212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.524461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.524491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.524620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.524650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.242 [2024-07-15 18:42:34.524833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.524868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 
00:27:49.242 [2024-07-15 18:42:34.525058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.242 [2024-07-15 18:42:34.525088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.242 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.525206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.525236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.525424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.525455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.525583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.525613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.525745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.525774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.525886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.525916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.526098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.526128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.526460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.526490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.526618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.526650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.526789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.526818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 
00:27:49.243 [2024-07-15 18:42:34.526928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.526957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.527062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.527091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.527327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.527367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.527495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.527526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.527734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.527764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.527936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.527965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.528081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.528110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.528357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.528387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.528501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.528531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.528663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.528693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 
00:27:49.243 [2024-07-15 18:42:34.528864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.528894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.529920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.529951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.530217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.530247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.530412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.530443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 00:27:49.243 [2024-07-15 18:42:34.530562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.530592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it. 
00:27:49.243 [2024-07-15 18:42:34.530800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.243 [2024-07-15 18:42:34.530830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.243 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error pair repeated for tqpair=0x7fb808000b90, 18:42:34.531006 through 18:42:34.537477 ...]
00:27:49.244 [2024-07-15 18:42:34.537657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.244 [2024-07-15 18:42:34.537693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:49.244 qpair failed and we were unable to recover it.
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.537805 through 18:42:34.546753 ...]
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.546870 through 18:42:34.547688 ...]
00:27:49.246 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.547799 through 18:42:34.547965 ...]
00:27:49.246 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:49.246 [2024-07-15 18:42:34.548151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.246 [2024-07-15 18:42:34.548182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:49.246 qpair failed and we were unable to recover it.
00:27:49.246 [2024-07-15 18:42:34.548379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.246 [2024-07-15 18:42:34.548409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 00:27:49.246 qpair failed and we were unable to recover it.
00:27:49.246 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.548538 through 18:42:34.548788 ...]
00:27:49.246 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.548900 through 18:42:34.554631 ...]
[... same error pair repeated for tqpair=0x501ed0, 18:42:34.554893 through 18:42:34.555724 ...]
00:27:49.247 [2024-07-15 18:42:34.555852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.247 [2024-07-15 18:42:34.555889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 00:27:49.247 qpair failed and we were unable to recover it.
[... same error pair repeated for tqpair=0x7fb800000b90, 18:42:34.556014 through 18:42:34.563153 ...]
00:27:49.249 [2024-07-15 18:42:34.563285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.249 [2024-07-15 18:42:34.563329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.249 qpair failed and we were unable to recover it.
[... same error pair repeated for tqpair=0x7fb808000b90, 18:42:34.563515 through 18:42:34.567254 ...]
00:27:49.249 [2024-07-15 18:42:34.566473 - 18:42:34.567254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 5x)
00:27:49.249 Malloc0
00:27:49.249 [2024-07-15 18:42:34.567515 - 18:42:34.567887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 3x)
00:27:49.249 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.249 [2024-07-15 18:42:34.568067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
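Note: the bare "Malloc0" above is most likely the stdout of the bdev-create RPC the test ran on the target side; SPDK's rpc.py prints the name of the bdev it just created, and the line lands in the middle of the host's reconnect noise because the two output streams are interleaved. A sketch of the equivalent call (the 64 MiB size and 512-byte block size are illustrative assumptions, not values taken from this run):

    # create a RAM-backed bdev named Malloc0: <total_size MiB> <block_size bytes>
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512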
00:27:49.249 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:49.249 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:49.249 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.249 [2024-07-15 18:42:34.568311 - 18:42:34.570013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 9x)
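Note: host/target_disconnect.sh@21 is where the target's TCP transport gets created; rpc_cmd is the test-harness wrapper around SPDK's scripts/rpc.py. The "*** TCP Transport Init ***" notice from nvmf_tcp_create a few lines below is the target acknowledging it. A minimal standalone sketch against a running nvmf_tgt (default RPC socket assumed; the extra -o flag in the trace is a TCP-specific toggle, which rpc.py documents as the short form of --c2h-success, and is omitted here):

    # register the TCP transport with default options
    ./scripts/rpc.py nvmf_create_transport -t tcp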
00:27:49.250 [2024-07-15 18:42:34.570192 - 18:42:34.574428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 20x)
00:27:49.250 [2024-07-15 18:42:34.574686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:27:49.250 [2024-07-15 18:42:34.574878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:49.250 [2024-07-15 18:42:34.574898 - 18:42:34.576398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 8x)
00:27:49.250 [2024-07-15 18:42:34.576589 - 18:42:34.579677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 16x)
00:27:49.251 [2024-07-15 18:42:34.579952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x501ed0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:27:49.251 [2024-07-15 18:42:34.580161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb800000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:27:49.251 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.251 [2024-07-15 18:42:34.580415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:27:49.251 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:49.251 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:49.251 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.251 [2024-07-15 18:42:34.580648 - 18:42:34.582086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 8x)
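Note: this step creates the subsystem the host has been trying to reach: -a allows any host NQN to connect, -s sets the controller serial number. A standalone sketch of the same RPC:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001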
00:27:49.251 [2024-07-15 18:42:34.582322 - 18:42:34.585308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 14x)
00:27:49.252 [2024-07-15 18:42:34.585447 - 18:42:34.591210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 26x)
00:27:49.252 [2024-07-15 18:42:34.591426 - 18:42:34.592187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 5x)
00:27:49.252 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.252 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:49.252 [2024-07-15 18:42:34.592358 - 18:42:34.592754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (repeated 3x)
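Note: this attaches the Malloc0 bdev created earlier as a namespace of cnode1, so the host will see one namespace once it finally connects. A standalone sketch of the same RPC:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0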
00:27:49.253 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.253 [2024-07-15 18:42:34.593025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.593055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.253 [2024-07-15 18:42:34.593225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.593255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.593440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.593471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.593718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.593749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.593973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.594009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.594197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.594227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.594347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.594388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.594573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.594603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.594858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.594888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 
00:27:49.253 [2024-07-15 18:42:34.595019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.595050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.595314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.595359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.595596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.595626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.595795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.595825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.596897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.596928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 
00:27:49.253 [2024-07-15 18:42:34.597186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.597216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.597422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.597453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.597553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.597583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.597751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.597782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.598018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.598048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.598283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.598313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.598435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.598465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.598728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.598758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.598942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.598972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 00:27:49.253 [2024-07-15 18:42:34.599209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.253 [2024-07-15 18:42:34.599238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420 00:27:49.253 qpair failed and we were unable to recover it. 
00:27:49.253 [2024-07-15 18:42:34.599422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.253 [2024-07-15 18:42:34.599453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb808000b90 with addr=10.0.0.2, port=4420
00:27:49.253 qpair failed and we were unable to recover it.
00:27:49.253 [2024-07-15 18:42:34.599854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.253 [2024-07-15 18:42:34.599887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb810000b90 with addr=10.0.0.2, port=4420
00:27:49.253 qpair failed and we were unable to recover it.
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
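The errno = 111 retries above are ECONNREFUSED: the host initiator keeps re-dialing 10.0.0.2:4420 until the target's listener comes up. A minimal Python sketch of that host-side behavior (illustrative only, not SPDK code; the address, port, and retry interval are assumptions):

import errno
import socket
import time

def connect_with_retry(addr="10.0.0.2", port=4420, interval=0.1):
    """Retry a TCP connect until a listener accepts, the way the NVMe/TCP
    initiator behaves while the target port is not yet listening."""
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((addr, port))
            return s                              # connected: listener is up
        except OSError as e:
            s.close()
            if e.errno != errno.ECONNREFUSED:     # errno 111: nothing listening yet
                raise                             # any other failure is fatal
            time.sleep(interval)                  # back off briefly, then retry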
00:27:49.254 [2024-07-15 18:42:34.603750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:49.254 [2024-07-15 18:42:34.605455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.254 [2024-07-15 18:42:34.605576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.254 [2024-07-15 18:42:34.605622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.254 [2024-07-15 18:42:34.605645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.254 [2024-07-15 18:42:34.605666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:49.254 [2024-07-15 18:42:34.605714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.254 qpair failed and we were unable to recover it.
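In the CONNECT failure above, sct 1 is the NVMe "command specific" status code type and sc 130 (0x82) is the Fabrics CONNECT "Invalid Parameters" status, which matches the target-side "Unknown controller ID 0x1" rejection logged just before it; the -6 reported by spdk_nvme_qpair_process_completions is -ENXIO propagated to the qpair. A small decode helper as a sketch (only the Fabrics CONNECT codes are listed; names follow the spec wording, not any SPDK API):

# NVMe-oF CONNECT command-specific status codes (NVMe Fabrics spec);
# sct 1 is the Command Specific status code type.
CONNECT_STATUS = {
    0x80: "CONNECT Incompatible Format",
    0x81: "CONNECT Controller Busy",
    0x82: "CONNECT Invalid Parameters",   # returned here for the unknown cntlid
    0x83: "CONNECT Restart Discovery",
    0x84: "CONNECT Invalid Host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Map an (sct, sc) pair from a failed Fabrics CONNECT to a name."""
    if sct == 1 and sc in CONNECT_STATUS:
        return CONNECT_STATUS[sc]
    return f"sct={sct:#x} sc={sc:#x}"

print(decode_connect_status(1, 130))  # -> "CONNECT Invalid Parameters"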
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.254 18:42:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4076307
00:27:49.254 [2024-07-15 18:42:34.625454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:49.254 [2024-07-15 18:42:34.625522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:49.254 [2024-07-15 18:42:34.625543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:49.254 [2024-07-15 18:42:34.625553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:49.254 [2024-07-15 18:42:34.625562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:49.254 [2024-07-15 18:42:34.625585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:49.254 qpair failed and we were unable to recover it.
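The rpc_cmd nvmf_subsystem_add_listener invocations above go through SPDK's JSON-RPC interface. A hedged sketch of the equivalent raw request for the cnode1 listener (the UNIX socket path /var/tmp/spdk.sock is SPDK's default and an assumption for this CI run; the parameters are taken from the command line in the log):

import json
import socket

# JSON-RPC 2.0 request equivalent to:
#   rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_listener",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
        },
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/var/tmp/spdk.sock")       # default SPDK RPC socket (assumption)
    s.sendall(json.dumps(req).encode())
    print(s.recv(65536).decode())         # JSON-RPC response; a "result" field is expected on success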
00:27:49.776 [2024-07-15 18:42:35.146730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.146786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.146800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.146806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.146812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.146829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.156823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.156876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.156889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.156895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.156901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.156916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.166827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.166891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.166905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.166912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.166918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.166931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 
00:27:49.776 [2024-07-15 18:42:35.176860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.176924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.176938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.176944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.176950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.176964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.186903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.186956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.186970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.186977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.186982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.186996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.196914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.196993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.197008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.197015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.197021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.197035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 
00:27:49.776 [2024-07-15 18:42:35.206940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.206996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.207011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.207017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.207023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.207038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.216987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.217036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.217050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.217056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.217062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.217075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.227005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.227052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.227066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.227072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.227078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.227092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 
00:27:49.776 [2024-07-15 18:42:35.237048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.237100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.237114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.237120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.237129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.237143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.247081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.247158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.247172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.247178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.247184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.247198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.257121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.257194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.257209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.257215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.257221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.257234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 
00:27:49.776 [2024-07-15 18:42:35.267115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.267167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.267182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.267189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.776 [2024-07-15 18:42:35.267195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.776 [2024-07-15 18:42:35.267209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.776 qpair failed and we were unable to recover it. 00:27:49.776 [2024-07-15 18:42:35.277141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.776 [2024-07-15 18:42:35.277213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.776 [2024-07-15 18:42:35.277226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.776 [2024-07-15 18:42:35.277233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.277238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.277252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 00:27:49.777 [2024-07-15 18:42:35.287178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.777 [2024-07-15 18:42:35.287234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.777 [2024-07-15 18:42:35.287247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.777 [2024-07-15 18:42:35.287253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.287259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.287273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 
00:27:49.777 [2024-07-15 18:42:35.297215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.777 [2024-07-15 18:42:35.297263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.777 [2024-07-15 18:42:35.297277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.777 [2024-07-15 18:42:35.297284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.297289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.297303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 00:27:49.777 [2024-07-15 18:42:35.307236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.777 [2024-07-15 18:42:35.307291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.777 [2024-07-15 18:42:35.307305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.777 [2024-07-15 18:42:35.307312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.307317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.307331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 00:27:49.777 [2024-07-15 18:42:35.317268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.777 [2024-07-15 18:42:35.317323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.777 [2024-07-15 18:42:35.317339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.777 [2024-07-15 18:42:35.317346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.317352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.317366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 
00:27:49.777 [2024-07-15 18:42:35.327298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.777 [2024-07-15 18:42:35.327356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.777 [2024-07-15 18:42:35.327370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.777 [2024-07-15 18:42:35.327379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.777 [2024-07-15 18:42:35.327385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:49.777 [2024-07-15 18:42:35.327399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.777 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.337326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.337379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.337393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.337400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.337406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.337420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.347352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.347416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.347430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.347437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.347442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.347456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.357388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.357441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.357455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.357461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.357467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.357481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.367408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.367464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.367478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.367484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.367490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.367504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.377450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.377501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.377515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.377522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.377527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.377541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.387462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.387515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.387529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.387535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.387541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.387555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.397553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.397656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.397670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.397676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.397682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.397695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.407531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.407592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.407606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.407612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.407619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.407632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.417541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.417594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.417607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.417617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.417623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.417636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.427580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.427632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.427645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.427651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.427657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.427671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.437621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.437676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.437690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.437697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.437702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.437716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.447655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.447710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.447724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.447730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.447736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.447749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.457713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.457766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.457780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.457786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.457792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.457806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.467742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.467800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.467815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.467821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.467827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.467842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.477724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.477775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.477790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.477796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.477802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.477816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.487801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.487855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.487869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.487876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.487882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.487895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.497777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.497826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.497840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.497846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.497852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.497866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.507840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.507892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.507909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.507916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.507921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.507935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.517862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.517921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.517935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.517941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.517947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.517961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.527892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.527960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.527974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.527980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.527986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.527999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 
00:27:50.036 [2024-07-15 18:42:35.537902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.036 [2024-07-15 18:42:35.537953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.036 [2024-07-15 18:42:35.537966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.036 [2024-07-15 18:42:35.537972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.036 [2024-07-15 18:42:35.537978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.036 [2024-07-15 18:42:35.537992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.036 qpair failed and we were unable to recover it. 00:27:50.036 [2024-07-15 18:42:35.547925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.037 [2024-07-15 18:42:35.547977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.037 [2024-07-15 18:42:35.547991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.037 [2024-07-15 18:42:35.547997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.037 [2024-07-15 18:42:35.548003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.037 [2024-07-15 18:42:35.548019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.037 qpair failed and we were unable to recover it. 00:27:50.037 [2024-07-15 18:42:35.557964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.037 [2024-07-15 18:42:35.558017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.037 [2024-07-15 18:42:35.558031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.037 [2024-07-15 18:42:35.558037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.037 [2024-07-15 18:42:35.558043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.037 [2024-07-15 18:42:35.558057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.037 qpair failed and we were unable to recover it. 
00:27:50.037 [2024-07-15 18:42:35.567985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.037 [2024-07-15 18:42:35.568039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.037 [2024-07-15 18:42:35.568052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.037 [2024-07-15 18:42:35.568058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.037 [2024-07-15 18:42:35.568064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.037 [2024-07-15 18:42:35.568078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.037 qpair failed and we were unable to recover it. 00:27:50.037 [2024-07-15 18:42:35.578011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.037 [2024-07-15 18:42:35.578063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.037 [2024-07-15 18:42:35.578077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.037 [2024-07-15 18:42:35.578083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.037 [2024-07-15 18:42:35.578089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.037 [2024-07-15 18:42:35.578102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.037 qpair failed and we were unable to recover it. 00:27:50.037 [2024-07-15 18:42:35.588037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.037 [2024-07-15 18:42:35.588089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.037 [2024-07-15 18:42:35.588102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.037 [2024-07-15 18:42:35.588109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.037 [2024-07-15 18:42:35.588115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.037 [2024-07-15 18:42:35.588128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.037 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 18:42:35.598085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.598138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.598154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.598161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.598166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.295 [2024-07-15 18:42:35.598180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 18:42:35.608098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.608150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.608164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.608170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.608176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.295 [2024-07-15 18:42:35.608190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 18:42:35.618173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.618224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.618238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.618245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.618250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.295 [2024-07-15 18:42:35.618264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.295 qpair failed and we were unable to recover it. 
00:27:50.295 [2024-07-15 18:42:35.628159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.628210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.628224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.628230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.628236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.295 [2024-07-15 18:42:35.628249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 18:42:35.638184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.638236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.638250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.638256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.638264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.295 [2024-07-15 18:42:35.638278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.295 qpair failed and we were unable to recover it. 00:27:50.295 [2024-07-15 18:42:35.648220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.295 [2024-07-15 18:42:35.648281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.295 [2024-07-15 18:42:35.648295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.295 [2024-07-15 18:42:35.648301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.295 [2024-07-15 18:42:35.648307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.648321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.658224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.658277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.658292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.658298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.658304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.658319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.668262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.668320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.668334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.668352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.668358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.668372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.678314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.678371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.678385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.678392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.678397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.678412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.688341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.688398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.688412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.688418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.688424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.688437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.698351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.698404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.698419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.698426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.698431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.698446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.708378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.708460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.708474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.708480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.708486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.708501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.718421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.718474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.718489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.718495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.718501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.718515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.728456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.728509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.728523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.728532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.728538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.728552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.738434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.738484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.738498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.738504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.738510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.738524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.748502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.748558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.748572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.748578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.748584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.748598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.758592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.758644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.758658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.758664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.758670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.758684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.768551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.768605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.768619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.768625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.768631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.768644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.778517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.778570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.778583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.778590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.778596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.778609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.788622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.788671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.788684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.788690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.788696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.788710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.798648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.798700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.798714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.798721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.798726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.798740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.808670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.808724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.808737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.808744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.808749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.808763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.818702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.818755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.818768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.818778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.818783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.818797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.828732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.828788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.828801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.828808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.828814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.828828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 
00:27:50.296 [2024-07-15 18:42:35.838785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.838834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.838847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.838854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.838860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.838874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.296 [2024-07-15 18:42:35.848777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.296 [2024-07-15 18:42:35.848831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.296 [2024-07-15 18:42:35.848844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.296 [2024-07-15 18:42:35.848851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.296 [2024-07-15 18:42:35.848856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.296 [2024-07-15 18:42:35.848869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.296 qpair failed and we were unable to recover it. 00:27:50.555 [2024-07-15 18:42:35.858816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.555 [2024-07-15 18:42:35.858872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.555 [2024-07-15 18:42:35.858885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.555 [2024-07-15 18:42:35.858891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.858897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.858910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:35.868761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.868818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.868832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.868839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.868844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.868858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.878868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.878922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.878936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.878942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.878947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.878961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.888877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.888927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.888940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.888946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.888952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.888966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:35.898911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.898965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.898979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.898985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.898991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.899004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.908933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.909011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.909027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.909033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.909039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.909053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.918916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.918974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.918988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.918994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.919000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.919014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:35.929001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.929054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.929068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.929074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.929080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.929094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.939037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.939085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.939099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.939105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.939111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.939124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.949047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.949097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.949111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.949117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.949123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.949139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:35.959091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.959153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.959167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.959173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.959179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.959193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.969105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.969157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.969171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.969177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.969183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.969196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.979137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.979214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.979228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.979234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.979240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.979253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:35.989145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.989197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.989211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.989218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.989223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.989238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:35.999194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:35.999270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:35.999287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:35.999293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:35.999299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:35.999312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:36.009269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.009362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.009376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.009382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.009388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.009402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:36.019272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.019325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.019342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.019349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.019355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.019369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:36.029273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.029322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.029339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.029346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.029352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.029366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:36.039332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.039388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.039402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.039407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.039416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.039430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:36.049333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.049388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.049402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.049408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.049414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.049427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:36.059376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.059445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.059458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.059464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.059470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.059484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 00:27:50.556 [2024-07-15 18:42:36.069394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.556 [2024-07-15 18:42:36.069445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.556 [2024-07-15 18:42:36.069458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.556 [2024-07-15 18:42:36.069465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.556 [2024-07-15 18:42:36.069471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.556 [2024-07-15 18:42:36.069484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.556 qpair failed and we were unable to recover it. 
00:27:50.556 [2024-07-15 18:42:36.079433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.557 [2024-07-15 18:42:36.079485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.557 [2024-07-15 18:42:36.079498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.557 [2024-07-15 18:42:36.079504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.557 [2024-07-15 18:42:36.079510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.557 [2024-07-15 18:42:36.079523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.557 qpair failed and we were unable to recover it. 00:27:50.557 [2024-07-15 18:42:36.089478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.557 [2024-07-15 18:42:36.089536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.557 [2024-07-15 18:42:36.089550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.557 [2024-07-15 18:42:36.089556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.557 [2024-07-15 18:42:36.089562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.557 [2024-07-15 18:42:36.089575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.557 qpair failed and we were unable to recover it. 00:27:50.557 [2024-07-15 18:42:36.099514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.557 [2024-07-15 18:42:36.099572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.557 [2024-07-15 18:42:36.099585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.557 [2024-07-15 18:42:36.099592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.557 [2024-07-15 18:42:36.099597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.557 [2024-07-15 18:42:36.099611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.557 qpair failed and we were unable to recover it. 
00:27:50.557 [2024-07-15 18:42:36.109501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.557 [2024-07-15 18:42:36.109553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.557 [2024-07-15 18:42:36.109567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.557 [2024-07-15 18:42:36.109574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.557 [2024-07-15 18:42:36.109579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.557 [2024-07-15 18:42:36.109594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.557 qpair failed and we were unable to recover it. 00:27:50.815 [2024-07-15 18:42:36.119469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.815 [2024-07-15 18:42:36.119525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.815 [2024-07-15 18:42:36.119538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.119544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.119549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.119563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.129582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.129636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.129649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.129656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.129667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.129681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 
00:27:50.816 [2024-07-15 18:42:36.139599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.139649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.139662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.139668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.139674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.139687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.149631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.149684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.149697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.149703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.149709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.149723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.159670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.159725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.159738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.159744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.159750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.159764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 
00:27:50.816 [2024-07-15 18:42:36.169698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.169751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.169765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.169771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.169777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.169791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.179711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.179758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.179771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.179778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.179784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.179798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.189741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.189794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.189808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.189814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.189820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.189834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 
00:27:50.816 [2024-07-15 18:42:36.199771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.199822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.199836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.199842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.199848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.199861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.209849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.209904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.209918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.209924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.209930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.209943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 00:27:50.816 [2024-07-15 18:42:36.219812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.816 [2024-07-15 18:42:36.219864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.816 [2024-07-15 18:42:36.219878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.816 [2024-07-15 18:42:36.219887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.816 [2024-07-15 18:42:36.219893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:50.816 [2024-07-15 18:42:36.219906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.816 qpair failed and we were unable to recover it. 
00:27:50.816 [2024-07-15 18:42:36.229780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.816 [2024-07-15 18:42:36.229832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.816 [2024-07-15 18:42:36.229845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.816 [2024-07-15 18:42:36.229852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.816 [2024-07-15 18:42:36.229858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.816 [2024-07-15 18:42:36.229872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.816 qpair failed and we were unable to recover it.
00:27:50.816 [2024-07-15 18:42:36.239813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.816 [2024-07-15 18:42:36.239866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.816 [2024-07-15 18:42:36.239880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.816 [2024-07-15 18:42:36.239886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.816 [2024-07-15 18:42:36.239892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.816 [2024-07-15 18:42:36.239906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.816 qpair failed and we were unable to recover it.
00:27:50.816 [2024-07-15 18:42:36.249830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.816 [2024-07-15 18:42:36.249887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.816 [2024-07-15 18:42:36.249901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.816 [2024-07-15 18:42:36.249907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.816 [2024-07-15 18:42:36.249913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.816 [2024-07-15 18:42:36.249927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.816 qpair failed and we were unable to recover it.
00:27:50.816 [2024-07-15 18:42:36.259868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.816 [2024-07-15 18:42:36.259917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.816 [2024-07-15 18:42:36.259930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.816 [2024-07-15 18:42:36.259937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.816 [2024-07-15 18:42:36.259943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.259957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.269890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.269991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.270009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.270017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.270023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.270039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.279910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.279967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.279982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.279989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.279995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.280010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.290001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.290056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.290071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.290078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.290083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.290098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.300043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.300099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.300113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.300119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.300125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.300139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.310022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.310072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.310090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.310096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.310102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.310116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.320064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.320119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.320133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.320140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.320145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.320159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.330050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.330108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.330122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.330129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.330134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.330149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.340066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.340126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.340140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.340146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.340152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.340166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.350191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.350279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.350293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.350299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.350304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.350322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.360198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.360277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.360292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.360298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.360304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.360318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:50.817 [2024-07-15 18:42:36.370138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.817 [2024-07-15 18:42:36.370209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.817 [2024-07-15 18:42:36.370223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.817 [2024-07-15 18:42:36.370229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.817 [2024-07-15 18:42:36.370235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:50.817 [2024-07-15 18:42:36.370250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:50.817 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.380180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.380234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.380248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.380255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.380260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.380274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.390312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.390370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.390384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.390391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.390396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.390410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.400287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.400353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.400371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.400377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.400382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.400396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.410367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.410439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.410453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.410460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.410465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.410479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.420375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.420427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.420441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.420447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.420453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.420466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.430317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.430382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.430396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.430402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.430408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.430421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.440435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.440506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.440519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.440526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.440535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.440549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.450454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.450531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.450545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.450552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.450557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.450571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.460427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.460474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.460488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.460494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.460500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.460515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.470509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.470561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.470576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.470582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.470588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.470603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.480548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.480600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.480614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.480621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.480626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.480641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.490508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.490563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.490577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.490583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.490588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.490603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.500590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.500647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.500661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.500667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.500673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.077 [2024-07-15 18:42:36.500687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.077 qpair failed and we were unable to recover it.
00:27:51.077 [2024-07-15 18:42:36.510597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.077 [2024-07-15 18:42:36.510654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.077 [2024-07-15 18:42:36.510668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.077 [2024-07-15 18:42:36.510674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.077 [2024-07-15 18:42:36.510680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.510693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.520686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.520740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.520754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.520761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.520766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.520780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.530661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.530732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.530746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.530752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.530761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.530775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.540716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.540771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.540785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.540791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.540797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.540811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.550676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.550731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.550745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.550751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.550756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.550770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.560722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.560772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.560786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.560793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.560798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.560812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.570785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.570840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.570854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.570861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.570866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.570881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.580840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.580893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.580906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.580913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.580919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.580933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.590833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.590880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.590893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.590900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.590905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.590919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.600875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.600928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.600942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.600948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.600954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.600968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.610839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.610890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.610903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.610909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.610915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.610929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.620915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.620967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.620981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.620991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.620996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.621010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.078 [2024-07-15 18:42:36.630984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.078 [2024-07-15 18:42:36.631037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.078 [2024-07-15 18:42:36.631051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.078 [2024-07-15 18:42:36.631057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.078 [2024-07-15 18:42:36.631063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.078 [2024-07-15 18:42:36.631077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.078 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.640969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.641027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.641041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.641047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.641053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.641066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.650974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.651032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.651046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.651052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.651058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.651071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.661047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.661097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.661111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.661117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.661123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.661136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.671059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.671109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.671123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.671129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.671135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.671149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.681111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.681162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.681176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.681183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.681189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.681203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.691159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.691234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.691248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.691254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.691260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.691274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.701074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.701133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.701147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.701153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.701159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.701173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.338 [2024-07-15 18:42:36.711186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.338 [2024-07-15 18:42:36.711248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.338 [2024-07-15 18:42:36.711264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.338 [2024-07-15 18:42:36.711271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.338 [2024-07-15 18:42:36.711276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.338 [2024-07-15 18:42:36.711290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.338 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.721225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.721301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.721315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.721321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.721327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.721347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.731237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.731293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.731306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.731313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.731319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.731332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.741267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.741321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.741335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.741344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.741350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.741364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.751286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.751341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.751355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.751361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.751367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.751383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.761346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.761401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.761415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.761422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.761428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.761442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.771341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.771394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.771408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.771414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.771420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.771434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.781369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.781420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.781434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.781440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.781445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.781459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.791414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.339 [2024-07-15 18:42:36.791478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.339 [2024-07-15 18:42:36.791492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.339 [2024-07-15 18:42:36.791498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.339 [2024-07-15 18:42:36.791504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:51.339 [2024-07-15 18:42:36.791517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.339 qpair failed and we were unable to recover it.
00:27:51.339 [2024-07-15 18:42:36.801444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.801498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.801514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.801521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.801526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.801540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 00:27:51.339 [2024-07-15 18:42:36.811425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.811482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.811496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.811502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.811508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.811522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 00:27:51.339 [2024-07-15 18:42:36.821467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.821520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.821534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.821540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.821545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.821559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 
00:27:51.339 [2024-07-15 18:42:36.831515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.831565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.831579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.831585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.831591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.831605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 00:27:51.339 [2024-07-15 18:42:36.841543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.841593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.841606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.841612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.841618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.841637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 00:27:51.339 [2024-07-15 18:42:36.851576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.339 [2024-07-15 18:42:36.851639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.339 [2024-07-15 18:42:36.851653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.339 [2024-07-15 18:42:36.851659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.339 [2024-07-15 18:42:36.851665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.339 [2024-07-15 18:42:36.851678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.339 qpair failed and we were unable to recover it. 
00:27:51.339 [2024-07-15 18:42:36.861594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.340 [2024-07-15 18:42:36.861640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.340 [2024-07-15 18:42:36.861654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.340 [2024-07-15 18:42:36.861661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.340 [2024-07-15 18:42:36.861667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.340 [2024-07-15 18:42:36.861680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.340 qpair failed and we were unable to recover it. 00:27:51.340 [2024-07-15 18:42:36.871630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.340 [2024-07-15 18:42:36.871681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.340 [2024-07-15 18:42:36.871695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.340 [2024-07-15 18:42:36.871701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.340 [2024-07-15 18:42:36.871707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.340 [2024-07-15 18:42:36.871721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.340 qpair failed and we were unable to recover it. 00:27:51.340 [2024-07-15 18:42:36.881640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.340 [2024-07-15 18:42:36.881723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.340 [2024-07-15 18:42:36.881736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.340 [2024-07-15 18:42:36.881742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.340 [2024-07-15 18:42:36.881748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.340 [2024-07-15 18:42:36.881762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.340 qpair failed and we were unable to recover it. 
00:27:51.340 [2024-07-15 18:42:36.891689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.340 [2024-07-15 18:42:36.891749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.340 [2024-07-15 18:42:36.891763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.340 [2024-07-15 18:42:36.891769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.340 [2024-07-15 18:42:36.891775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.340 [2024-07-15 18:42:36.891788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.340 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.901743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.901804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.901817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.901824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.901829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.901843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.911791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.911850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.911864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.911871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.911876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.911890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 
00:27:51.599 [2024-07-15 18:42:36.921799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.921856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.921870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.921877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.921882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.921896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.931798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.931854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.931868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.931874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.931882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.931896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.941821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.941868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.941882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.941888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.941894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.941908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 
00:27:51.599 [2024-07-15 18:42:36.951856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.951909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.951923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.951929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.951935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.951949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.961884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.961939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.961953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.961959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.961965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.961979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.971929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.971978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.971992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.971999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.972005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.972018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 
00:27:51.599 [2024-07-15 18:42:36.981948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.982037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.982050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.982056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.982062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.982075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:36.992021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:36.992079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:36.992093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.599 [2024-07-15 18:42:36.992099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.599 [2024-07-15 18:42:36.992105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.599 [2024-07-15 18:42:36.992119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.599 qpair failed and we were unable to recover it. 00:27:51.599 [2024-07-15 18:42:37.002002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.599 [2024-07-15 18:42:37.002054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.599 [2024-07-15 18:42:37.002068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.002074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.002079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.002093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 
00:27:51.600 [2024-07-15 18:42:37.012028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.012083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.012096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.012102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.012108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.012122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.021987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.022075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.022089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.022098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.022104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.022117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.032075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.032127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.032141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.032147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.032153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.032167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 
00:27:51.600 [2024-07-15 18:42:37.042107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.042159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.042172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.042179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.042184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.042198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.052138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.052192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.052206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.052212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.052218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.052232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.062180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.062232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.062246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.062252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.062258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.062272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 
00:27:51.600 [2024-07-15 18:42:37.072203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.072254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.072268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.072275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.072280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.072294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.082263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.082322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.082339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.082346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.082352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.082365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.092292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.092357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.092370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.092376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.092382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.092396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 
00:27:51.600 [2024-07-15 18:42:37.102290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.102347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.102361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.102367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.102373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.102387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.112326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.112431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.112445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.112454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.112460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.112473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.122367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.122419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.122433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.122440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.122445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.122460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 
00:27:51.600 [2024-07-15 18:42:37.132384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.132439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.600 [2024-07-15 18:42:37.132453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.600 [2024-07-15 18:42:37.132460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.600 [2024-07-15 18:42:37.132465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.600 [2024-07-15 18:42:37.132480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.600 qpair failed and we were unable to recover it. 00:27:51.600 [2024-07-15 18:42:37.142404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.600 [2024-07-15 18:42:37.142451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.601 [2024-07-15 18:42:37.142464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.601 [2024-07-15 18:42:37.142471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.601 [2024-07-15 18:42:37.142476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.601 [2024-07-15 18:42:37.142490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.601 qpair failed and we were unable to recover it. 00:27:51.601 [2024-07-15 18:42:37.152418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.601 [2024-07-15 18:42:37.152477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.601 [2024-07-15 18:42:37.152491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.601 [2024-07-15 18:42:37.152497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.601 [2024-07-15 18:42:37.152503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.601 [2024-07-15 18:42:37.152517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.601 qpair failed and we were unable to recover it. 
00:27:51.860 [2024-07-15 18:42:37.162468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.162525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.162538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.162545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.162550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.162564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.172475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.172567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.172580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.172586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.172592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.172606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.182517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.182569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.182582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.182589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.182594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.182608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 
00:27:51.860 [2024-07-15 18:42:37.192570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.192635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.192649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.192655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.192661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.192675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.202588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.202639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.202656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.202662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.202668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.202681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.212614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.212672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.212686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.212693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.212699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.212712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 
00:27:51.860 [2024-07-15 18:42:37.222626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.222676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.222689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.222695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.222701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.222715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.232659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.232704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.232719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.232725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.232731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.232744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.242667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.242727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.242741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.242747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.242753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.242770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 
00:27:51.860 [2024-07-15 18:42:37.252712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.252766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.252780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.252787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.252792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.252807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.262777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.262830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.262844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.262850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.262856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.262870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.272827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.272887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.272901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.272907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.272913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.272927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 
00:27:51.860 [2024-07-15 18:42:37.282849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.282910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.282924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.860 [2024-07-15 18:42:37.282930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.860 [2024-07-15 18:42:37.282936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.860 [2024-07-15 18:42:37.282949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.860 qpair failed and we were unable to recover it. 00:27:51.860 [2024-07-15 18:42:37.292829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.860 [2024-07-15 18:42:37.292888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.860 [2024-07-15 18:42:37.292904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.292910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.292916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.292929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.302859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.302911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.302925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.302931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.302937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.302950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 
00:27:51.861 [2024-07-15 18:42:37.312939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.312999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.313012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.313019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.313024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.313038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.322921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.322976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.322990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.322996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.323002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.323016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.332946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.333001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.333015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.333021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.333030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.333044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 
00:27:51.861 [2024-07-15 18:42:37.342912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.342964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.342979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.342985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.342991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.343005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.352921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.352977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.352991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.352998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.353004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.353018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.363045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.363122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.363136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.363143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.363148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.363162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 
00:27:51.861 [2024-07-15 18:42:37.373025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.373078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.373092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.373098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.373104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.373117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.383087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.383143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.383156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.383163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.383168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.383182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.393117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.393167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.393180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.393186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.393192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.393205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 
00:27:51.861 [2024-07-15 18:42:37.403099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.403156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.403170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.403177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.403182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.403196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:51.861 [2024-07-15 18:42:37.413173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.861 [2024-07-15 18:42:37.413231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.861 [2024-07-15 18:42:37.413244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.861 [2024-07-15 18:42:37.413251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.861 [2024-07-15 18:42:37.413257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:51.861 [2024-07-15 18:42:37.413271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.861 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.423238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.423298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.423312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.423322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.423327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.423344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 
00:27:52.121 [2024-07-15 18:42:37.433222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.433269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.433283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.433289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.433295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.433309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.443271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.443335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.443352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.443359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.443365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.443380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.453288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.453344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.453359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.453365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.453371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.453386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 
00:27:52.121 [2024-07-15 18:42:37.463305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.463358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.463372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.463378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.463384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.463398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.473375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.473429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.473444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.473451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.473457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.473471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.483382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.483434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.483448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.483454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.483460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.483474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 
00:27:52.121 [2024-07-15 18:42:37.493413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.493464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.493478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.493484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.493490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.493504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.503429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.503488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.503502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.503508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.503514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.503528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.513459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.513512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.513527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.513536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.513541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.513555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 
00:27:52.121 [2024-07-15 18:42:37.523499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.523552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.523566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.523572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.523578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.523592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.533523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.533589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.533603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.533609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.121 [2024-07-15 18:42:37.533614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.121 [2024-07-15 18:42:37.533628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.121 qpair failed and we were unable to recover it. 00:27:52.121 [2024-07-15 18:42:37.543559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.121 [2024-07-15 18:42:37.543610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.121 [2024-07-15 18:42:37.543625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.121 [2024-07-15 18:42:37.543631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.543636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.543650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 
00:27:52.122 [2024-07-15 18:42:37.553578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.553629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.553643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.553649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.553655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.553668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.563650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.563709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.563723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.563729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.563735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.563748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.573631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.573685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.573699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.573705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.573711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.573724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 
00:27:52.122 [2024-07-15 18:42:37.583659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.583708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.583721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.583727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.583733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.583747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.593663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.593711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.593725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.593731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.593737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.593750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.603730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.603797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.603814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.603820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.603825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.603839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 
00:27:52.122 [2024-07-15 18:42:37.613745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.613799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.613813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.613819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.613825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.613838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.623803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.623864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.623877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.623883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.623889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.623903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.633805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.633853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.633867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.633873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.633878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.633892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 
00:27:52.122 [2024-07-15 18:42:37.643834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.643883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.643896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.643903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.643908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.643927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.653883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.653956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.653970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.653977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.653982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.653996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.122 [2024-07-15 18:42:37.663909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.663962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.663976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.663982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.663988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.664002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 
00:27:52.122 [2024-07-15 18:42:37.673857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.122 [2024-07-15 18:42:37.673910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.122 [2024-07-15 18:42:37.673924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.122 [2024-07-15 18:42:37.673931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.122 [2024-07-15 18:42:37.673937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.122 [2024-07-15 18:42:37.673951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.122 qpair failed and we were unable to recover it. 00:27:52.382 [2024-07-15 18:42:37.683910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.683966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.382 [2024-07-15 18:42:37.683980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.382 [2024-07-15 18:42:37.683986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.382 [2024-07-15 18:42:37.683992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.382 [2024-07-15 18:42:37.684006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.382 qpair failed and we were unable to recover it. 00:27:52.382 [2024-07-15 18:42:37.693979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.694034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.382 [2024-07-15 18:42:37.694052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.382 [2024-07-15 18:42:37.694058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.382 [2024-07-15 18:42:37.694064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.382 [2024-07-15 18:42:37.694079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.382 qpair failed and we were unable to recover it. 
00:27:52.382 [2024-07-15 18:42:37.703940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.704034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.382 [2024-07-15 18:42:37.704050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.382 [2024-07-15 18:42:37.704057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.382 [2024-07-15 18:42:37.704062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.382 [2024-07-15 18:42:37.704077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.382 qpair failed and we were unable to recover it. 00:27:52.382 [2024-07-15 18:42:37.714079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.714134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.382 [2024-07-15 18:42:37.714148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.382 [2024-07-15 18:42:37.714154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.382 [2024-07-15 18:42:37.714160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.382 [2024-07-15 18:42:37.714174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.382 qpair failed and we were unable to recover it. 00:27:52.382 [2024-07-15 18:42:37.724043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.724098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.382 [2024-07-15 18:42:37.724112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.382 [2024-07-15 18:42:37.724120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.382 [2024-07-15 18:42:37.724126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.382 [2024-07-15 18:42:37.724140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.382 qpair failed and we were unable to recover it. 
00:27:52.382 [2024-07-15 18:42:37.734106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.382 [2024-07-15 18:42:37.734160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.734174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.734181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.734190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.734204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.744159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.744225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.744240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.744247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.744253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.744266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.754134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.754196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.754211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.754218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.754224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.754238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 
00:27:52.383 [2024-07-15 18:42:37.764179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.764235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.764250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.764257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.764263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.764278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.774172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.774229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.774242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.774250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.774256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.774270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.784237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.784295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.784309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.784316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.784322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.784340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 
00:27:52.383 [2024-07-15 18:42:37.794278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.794365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.794380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.794387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.794392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.794407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.804239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.804295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.804309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.804315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.804321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.804340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.814330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.814385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.814399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.814406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.814411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.814426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 
00:27:52.383 [2024-07-15 18:42:37.824349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.824403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.824417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.824424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.824433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.824448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.834405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.834462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.834476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.834484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.834490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.834505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.844461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.844527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.844541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.844548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.844554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.844568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 
00:27:52.383 [2024-07-15 18:42:37.854429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.854486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.854500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.854508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.854514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.854529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.864509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.864583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.864598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.864605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.864610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.383 [2024-07-15 18:42:37.864625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.383 qpair failed and we were unable to recover it. 00:27:52.383 [2024-07-15 18:42:37.874528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.383 [2024-07-15 18:42:37.874585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.383 [2024-07-15 18:42:37.874601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.383 [2024-07-15 18:42:37.874609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.383 [2024-07-15 18:42:37.874616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.874632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 
00:27:52.384 [2024-07-15 18:42:37.884529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.884583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.884597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.884604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.884610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.884625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 00:27:52.384 [2024-07-15 18:42:37.894544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.894599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.894613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.894619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.894626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.894640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 00:27:52.384 [2024-07-15 18:42:37.904592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.904651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.904665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.904671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.904678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.904692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 
00:27:52.384 [2024-07-15 18:42:37.914585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.914639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.914653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.914664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.914670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.914684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 00:27:52.384 [2024-07-15 18:42:37.924592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.924647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.924661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.924667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.924674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.924688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 00:27:52.384 [2024-07-15 18:42:37.934699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.384 [2024-07-15 18:42:37.934779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.384 [2024-07-15 18:42:37.934793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.384 [2024-07-15 18:42:37.934800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.384 [2024-07-15 18:42:37.934806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:52.384 [2024-07-15 18:42:37.934820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.384 qpair failed and we were unable to recover it. 
00:27:52.643 [2024-07-15 18:42:37.944697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.944752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.944766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.944772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.944779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.944793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.643 qpair failed and we were unable to recover it.
00:27:52.643 [2024-07-15 18:42:37.954677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.954732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.954746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.954754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.954760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.954774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.643 qpair failed and we were unable to recover it.
00:27:52.643 [2024-07-15 18:42:37.964718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.964770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.964784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.964790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.964796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.964811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.643 qpair failed and we were unable to recover it.
00:27:52.643 [2024-07-15 18:42:37.974795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.974852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.974867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.974874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.974880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.974895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.643 qpair failed and we were unable to recover it.
00:27:52.643 [2024-07-15 18:42:37.984767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.984829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.984843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.984851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.984856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.984871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.643 qpair failed and we were unable to recover it.
00:27:52.643 [2024-07-15 18:42:37.994842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.643 [2024-07-15 18:42:37.994942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.643 [2024-07-15 18:42:37.994957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.643 [2024-07-15 18:42:37.994964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.643 [2024-07-15 18:42:37.994971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.643 [2024-07-15 18:42:37.994985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.004817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.004873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.004891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.004898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.004905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.004920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.014918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.014970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.014984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.014991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.014997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.015012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.024910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.024975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.024990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.024996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.025003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.025017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.034973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.035027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.035042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.035048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.035055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.035070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.044997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.045054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.045069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.045077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.045083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.045100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.055029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.055095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.055110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.055117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.055123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.055137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.065050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.065099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.065114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.065120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.065127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.065141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.075060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.075115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.075129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.075136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.075143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.075157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.085121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.085171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.085185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.085192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.085198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.085212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.095160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.095215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.095232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.095239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.095245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.095259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.105166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.105215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.105229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.105236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.105242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.105256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.115199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.115252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.115267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.115273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.115280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.115295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.125226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.125276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.125290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.125296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.125303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.125317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.135267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.644 [2024-07-15 18:42:38.135318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.644 [2024-07-15 18:42:38.135333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.644 [2024-07-15 18:42:38.135343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.644 [2024-07-15 18:42:38.135352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.644 [2024-07-15 18:42:38.135367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.644 qpair failed and we were unable to recover it.
00:27:52.644 [2024-07-15 18:42:38.145273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.145327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.145346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.145353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.145359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.145374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.645 [2024-07-15 18:42:38.155335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.155401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.155415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.155422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.155429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.155443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.645 [2024-07-15 18:42:38.165346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.165405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.165419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.165426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.165432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.165447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.645 [2024-07-15 18:42:38.175298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.175356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.175370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.175377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.175384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.175399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.645 [2024-07-15 18:42:38.185391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.185448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.185463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.185470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.185476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.185491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.645 [2024-07-15 18:42:38.195416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.645 [2024-07-15 18:42:38.195471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.645 [2024-07-15 18:42:38.195485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.645 [2024-07-15 18:42:38.195493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.645 [2024-07-15 18:42:38.195499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.645 [2024-07-15 18:42:38.195514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.645 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.205449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.205510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.205524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.205531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.205537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.205551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.215477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.215533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.215548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.215555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.215561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.215575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.225511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.225561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.225575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.225582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.225591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.225605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.235471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.235534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.235548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.235555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.235561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.235576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.245574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.245649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.245663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.245671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.245677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.245691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.255602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.255657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.255671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.255679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.255685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.255699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.265621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.265675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.265689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.265697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.265704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.265718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.275580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.275632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.275646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.275652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.275660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.275674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.285688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.285741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.285756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.285763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.285769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.285783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.295703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.295759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.295773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.295780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.295786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.904 [2024-07-15 18:42:38.295800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.904 qpair failed and we were unable to recover it.
00:27:52.904 [2024-07-15 18:42:38.305719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.904 [2024-07-15 18:42:38.305770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.904 [2024-07-15 18:42:38.305784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.904 [2024-07-15 18:42:38.305791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.904 [2024-07-15 18:42:38.305797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.305811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.315759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.315810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.315824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.315835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.315841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.315854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.325801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.325856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.325871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.325878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.325884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.325898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.335816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.335870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.335884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.335891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.335897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.335911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.345805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.345854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.345868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.345875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.345881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.345895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.355853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.355911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.355926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.355933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.355940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.355954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.365885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.365940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.365955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.365962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.365969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.365983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.375945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.376034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.376048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.376055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.376061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.376075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.385939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.385991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.386006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.386013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.386019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.386034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.395980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.396027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.396041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.396048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.396055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.396069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.405971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.406040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.406057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.406064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.406070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.406084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.416050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.416107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.416121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.416128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.416134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.416148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.426031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.426079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.426093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.426100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.426107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.426122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.436068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.436118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.436133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.436140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.436146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.436161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.446143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.446196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.905 [2024-07-15 18:42:38.446211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.905 [2024-07-15 18:42:38.446217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.905 [2024-07-15 18:42:38.446224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.905 [2024-07-15 18:42:38.446241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.905 qpair failed and we were unable to recover it.
00:27:52.905 [2024-07-15 18:42:38.456151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:52.905 [2024-07-15 18:42:38.456207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:52.906 [2024-07-15 18:42:38.456221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:52.906 [2024-07-15 18:42:38.456229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:52.906 [2024-07-15 18:42:38.456235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:52.906 [2024-07-15 18:42:38.456250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.906 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.466221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.164 [2024-07-15 18:42:38.466290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.164 [2024-07-15 18:42:38.466306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.164 [2024-07-15 18:42:38.466313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.164 [2024-07-15 18:42:38.466319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:53.164 [2024-07-15 18:42:38.466335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:53.164 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.476200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.164 [2024-07-15 18:42:38.476260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.164 [2024-07-15 18:42:38.476274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.164 [2024-07-15 18:42:38.476282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.164 [2024-07-15 18:42:38.476288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:53.164 [2024-07-15 18:42:38.476303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:53.164 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.486266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.164 [2024-07-15 18:42:38.486323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.164 [2024-07-15 18:42:38.486341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.164 [2024-07-15 18:42:38.486348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.164 [2024-07-15 18:42:38.486355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:53.164 [2024-07-15 18:42:38.486370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:53.164 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.496312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.164 [2024-07-15 18:42:38.496373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.164 [2024-07-15 18:42:38.496391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.164 [2024-07-15 18:42:38.496398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.164 [2024-07-15 18:42:38.496404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:53.164 [2024-07-15 18:42:38.496419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:53.164 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.506222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.164 [2024-07-15 18:42:38.506279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.164 [2024-07-15 18:42:38.506293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.164 [2024-07-15 18:42:38.506300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.164 [2024-07-15 18:42:38.506306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90
00:27:53.164 [2024-07-15 18:42:38.506320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:53.164 qpair failed and we were unable to recover it.
00:27:53.164 [2024-07-15 18:42:38.516293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.164 [2024-07-15 18:42:38.516346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.164 [2024-07-15 18:42:38.516361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.164 [2024-07-15 18:42:38.516368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.164 [2024-07-15 18:42:38.516374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.164 [2024-07-15 18:42:38.516388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 18:42:38.526381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.164 [2024-07-15 18:42:38.526431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.164 [2024-07-15 18:42:38.526445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.164 [2024-07-15 18:42:38.526452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.164 [2024-07-15 18:42:38.526458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.164 [2024-07-15 18:42:38.526472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 18:42:38.536377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.164 [2024-07-15 18:42:38.536432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.164 [2024-07-15 18:42:38.536446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.164 [2024-07-15 18:42:38.536453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.164 [2024-07-15 18:42:38.536459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.164 [2024-07-15 18:42:38.536476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.164 qpair failed and we were unable to recover it. 
00:27:53.164 [2024-07-15 18:42:38.546401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.164 [2024-07-15 18:42:38.546459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.164 [2024-07-15 18:42:38.546474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.164 [2024-07-15 18:42:38.546481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.164 [2024-07-15 18:42:38.546487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.164 [2024-07-15 18:42:38.546502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 18:42:38.556429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.164 [2024-07-15 18:42:38.556481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.164 [2024-07-15 18:42:38.556495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.164 [2024-07-15 18:42:38.556503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.164 [2024-07-15 18:42:38.556509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.164 [2024-07-15 18:42:38.556524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 18:42:38.566464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.566527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.566541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.566548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.566554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.566569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 18:42:38.576481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.576535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.576550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.576556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.576563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.576577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.586513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.586570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.586584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.586591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.586597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.586611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.596473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.596527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.596541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.596548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.596555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.596569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 18:42:38.606591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.606644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.606659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.606666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.606673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.606687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.616601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.616655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.616669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.616676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.616682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.616697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.626626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.626675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.626688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.626695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.626704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.626719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 18:42:38.636582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.636629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.636643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.636650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.636656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.636670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.646677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.646741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.646755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.646763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.646768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.646782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.656719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.656775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.656789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.656796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.656802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.656816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 18:42:38.666728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.666781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.666795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.666802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.666808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.666822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.676759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.676822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.676837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.676844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.676850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.676864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.686837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.686944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.686958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.686965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.686971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.686986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 18:42:38.696836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.696887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.696901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.696907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.696913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.696928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.706856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.706912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.706926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.706933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.706939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.706954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 18:42:38.716822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.165 [2024-07-15 18:42:38.716911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.165 [2024-07-15 18:42:38.716924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.165 [2024-07-15 18:42:38.716934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.165 [2024-07-15 18:42:38.716940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.165 [2024-07-15 18:42:38.716954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.423 [2024-07-15 18:42:38.726920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.423 [2024-07-15 18:42:38.726978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.423 [2024-07-15 18:42:38.726992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.423 [2024-07-15 18:42:38.726998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.727005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.727019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.736935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.737002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.737016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.737023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.737029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.737043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.746963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.747016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.747030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.747037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.747044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.747058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 
00:27:53.424 [2024-07-15 18:42:38.756987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.757039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.757053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.757059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.757066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.757081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.767038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.767089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.767103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.767110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.767116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.767130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.777075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.777139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.777153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.777160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.777166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.777181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 
00:27:53.424 [2024-07-15 18:42:38.787084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.787139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.787153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.787160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.787166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.787180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.797114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.797166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.797179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.797186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.797193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.797208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.807155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.807209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.807223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.807233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.807239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.807253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 
00:27:53.424 [2024-07-15 18:42:38.817174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.817267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.817282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.817289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.817296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.817310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.827195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.827243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.827258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.827265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.827271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.827285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.837237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.837300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.837315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.837322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.837328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.837346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 
00:27:53.424 [2024-07-15 18:42:38.847265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.847318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.424 [2024-07-15 18:42:38.847331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.424 [2024-07-15 18:42:38.847341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.424 [2024-07-15 18:42:38.847347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.424 [2024-07-15 18:42:38.847362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.424 qpair failed and we were unable to recover it. 00:27:53.424 [2024-07-15 18:42:38.857290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.424 [2024-07-15 18:42:38.857349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.857363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.857371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.857377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.857392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.867318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.867370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.867385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.867392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.867398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.867412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 
00:27:53.425 [2024-07-15 18:42:38.877340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.877420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.877435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.877442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.877448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.877462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.887304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.887363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.887377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.887384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.887390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.887404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.897398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.897447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.897465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.897473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.897479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.897493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 
00:27:53.425 [2024-07-15 18:42:38.907434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.907491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.907505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.907512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.907518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.907533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.917450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.917507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.917521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.917530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.917536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.917550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.927490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.927543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.927557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.927564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.927570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.927584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 
00:27:53.425 [2024-07-15 18:42:38.937541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.937597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.937611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.937618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.937624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.937642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.947496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.947552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.947567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.947574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.947580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.947595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 00:27:53.425 [2024-07-15 18:42:38.957572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.957626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.957640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.957648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.425 [2024-07-15 18:42:38.957654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.425 [2024-07-15 18:42:38.957668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.425 qpair failed and we were unable to recover it. 
00:27:53.425 [2024-07-15 18:42:38.967605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.425 [2024-07-15 18:42:38.967658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.425 [2024-07-15 18:42:38.967672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.425 [2024-07-15 18:42:38.967679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.426 [2024-07-15 18:42:38.967685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.426 [2024-07-15 18:42:38.967700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.426 qpair failed and we were unable to recover it. 00:27:53.426 [2024-07-15 18:42:38.977603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.426 [2024-07-15 18:42:38.977660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.426 [2024-07-15 18:42:38.977674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.426 [2024-07-15 18:42:38.977682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.426 [2024-07-15 18:42:38.977688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.426 [2024-07-15 18:42:38.977702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.426 qpair failed and we were unable to recover it. 00:27:53.684 [2024-07-15 18:42:38.987706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.684 [2024-07-15 18:42:38.987804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.684 [2024-07-15 18:42:38.987821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.684 [2024-07-15 18:42:38.987828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.684 [2024-07-15 18:42:38.987834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.684 [2024-07-15 18:42:38.987848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.684 qpair failed and we were unable to recover it. 
00:27:53.684 [2024-07-15 18:42:38.997692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.684 [2024-07-15 18:42:38.997739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.684 [2024-07-15 18:42:38.997752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.684 [2024-07-15 18:42:38.997759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.684 [2024-07-15 18:42:38.997765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.684 [2024-07-15 18:42:38.997780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.684 qpair failed and we were unable to recover it. 00:27:53.684 [2024-07-15 18:42:39.007699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.684 [2024-07-15 18:42:39.007751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.684 [2024-07-15 18:42:39.007766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.684 [2024-07-15 18:42:39.007773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.684 [2024-07-15 18:42:39.007779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.684 [2024-07-15 18:42:39.007794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.017713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.017766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.017780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.017787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.017794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.017809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 
00:27:53.685 [2024-07-15 18:42:39.027785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.027863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.027878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.027885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.027893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.027908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.037778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.037834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.037849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.037856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.037862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.037876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.047823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.047874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.047888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.047895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.047901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.047915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 
00:27:53.685 [2024-07-15 18:42:39.057853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.057906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.057921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.057927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.057934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.057948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.067859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.067908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.067922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.067929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.067935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.067949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.077885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.077942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.077956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.077963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.077969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.077983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 
00:27:53.685 [2024-07-15 18:42:39.087923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.087984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.087998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.088006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.088012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.088027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.097937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.097988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.098002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.098008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.098014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.098029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.107905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.107960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.107974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.107981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.107987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.108001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 
00:27:53.685 [2024-07-15 18:42:39.117983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.118034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.118048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.118061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.118067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.118081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.127955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.128022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.128037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.128044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.128050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.128065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.138097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.138196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.138210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.138217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.138223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.138238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 
00:27:53.685 [2024-07-15 18:42:39.148096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.148148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.685 [2024-07-15 18:42:39.148162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.685 [2024-07-15 18:42:39.148169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.685 [2024-07-15 18:42:39.148176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.685 [2024-07-15 18:42:39.148190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.685 qpair failed and we were unable to recover it. 00:27:53.685 [2024-07-15 18:42:39.158114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.685 [2024-07-15 18:42:39.158174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.158188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.158195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.158202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.158217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.686 [2024-07-15 18:42:39.168182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.168246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.168260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.168267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.168273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.168289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 
00:27:53.686 [2024-07-15 18:42:39.178111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.178166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.178180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.178187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.178194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.178208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.686 [2024-07-15 18:42:39.188181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.188261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.188276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.188283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.188289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.188303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.686 [2024-07-15 18:42:39.198203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.198250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.198265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.198272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.198278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.198293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 
00:27:53.686 [2024-07-15 18:42:39.208260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.208316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.208331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.208345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.208353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.208368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.686 [2024-07-15 18:42:39.218269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.218322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.218340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.218347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.218353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.218367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.686 [2024-07-15 18:42:39.228300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.228478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.228494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.228501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.228507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.228522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 
00:27:53.686 [2024-07-15 18:42:39.238323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.686 [2024-07-15 18:42:39.238381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.686 [2024-07-15 18:42:39.238395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.686 [2024-07-15 18:42:39.238402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.686 [2024-07-15 18:42:39.238408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.686 [2024-07-15 18:42:39.238423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.686 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.248366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.248424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.248439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.248446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.248452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.248467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.258375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.258430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.258445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.258451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.258458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.258472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 
00:27:53.944 [2024-07-15 18:42:39.268360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.268425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.268446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.268458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.268468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.268487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.278445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.278498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.278514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.278521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.278528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.278543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.288501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.288558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.288573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.288581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.288587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.288602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 
00:27:53.944 [2024-07-15 18:42:39.298492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.298547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.298564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.298571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.298578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.298592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.308534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.308587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.308601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.308609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.308616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.944 [2024-07-15 18:42:39.308631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-07-15 18:42:39.318590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.944 [2024-07-15 18:42:39.318649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.944 [2024-07-15 18:42:39.318666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.944 [2024-07-15 18:42:39.318673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.944 [2024-07-15 18:42:39.318679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.318695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-07-15 18:42:39.328524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.328582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.328597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.328604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.328610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.328625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.338547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.338597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.338612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.338618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.338624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.338642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.348617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.348706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.348721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.348728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.348734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.348749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-07-15 18:42:39.358603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.358654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.358668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.358675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.358682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.358696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.368702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.368758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.368772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.368780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.368786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.368800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.378658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.378713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.378728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.378735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.378743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.378758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-07-15 18:42:39.388754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.388808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.388826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.388833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.388839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.388854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.398759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.398810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.398825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.398832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.398839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.398853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.408818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.408911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.408925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.408931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.408937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.408951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-07-15 18:42:39.418786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.418840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.418854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.418861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.418868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.418882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.428866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.428918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.428932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.428939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.428948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.428962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.438886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.438937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.438950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.438957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.438964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.438979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-07-15 18:42:39.448943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.449002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.449016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.449024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.449030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.449044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.458925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.459021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.945 [2024-07-15 18:42:39.459036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.945 [2024-07-15 18:42:39.459042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.945 [2024-07-15 18:42:39.459048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.945 [2024-07-15 18:42:39.459063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-07-15 18:42:39.469007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.945 [2024-07-15 18:42:39.469062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.946 [2024-07-15 18:42:39.469077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.946 [2024-07-15 18:42:39.469084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.946 [2024-07-15 18:42:39.469091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.946 [2024-07-15 18:42:39.469105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-07-15 18:42:39.479028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.946 [2024-07-15 18:42:39.479082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.946 [2024-07-15 18:42:39.479097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.946 [2024-07-15 18:42:39.479104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.946 [2024-07-15 18:42:39.479110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.946 [2024-07-15 18:42:39.479125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-07-15 18:42:39.488978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.946 [2024-07-15 18:42:39.489034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.946 [2024-07-15 18:42:39.489049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.946 [2024-07-15 18:42:39.489056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.946 [2024-07-15 18:42:39.489062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.946 [2024-07-15 18:42:39.489077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-07-15 18:42:39.499069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.946 [2024-07-15 18:42:39.499125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.946 [2024-07-15 18:42:39.499140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.946 [2024-07-15 18:42:39.499148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.946 [2024-07-15 18:42:39.499155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:53.946 [2024-07-15 18:42:39.499169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:54.204 [2024-07-15 18:42:39.509087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.204 [2024-07-15 18:42:39.509143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.204 [2024-07-15 18:42:39.509158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.204 [2024-07-15 18:42:39.509165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.204 [2024-07-15 18:42:39.509171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.204 [2024-07-15 18:42:39.509185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.204 qpair failed and we were unable to recover it. 00:27:54.204 [2024-07-15 18:42:39.519114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.204 [2024-07-15 18:42:39.519170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.204 [2024-07-15 18:42:39.519193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.204 [2024-07-15 18:42:39.519200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.204 [2024-07-15 18:42:39.519210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.204 [2024-07-15 18:42:39.519225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.204 qpair failed and we were unable to recover it. 00:27:54.204 [2024-07-15 18:42:39.529093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.204 [2024-07-15 18:42:39.529148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.204 [2024-07-15 18:42:39.529162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.204 [2024-07-15 18:42:39.529169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.204 [2024-07-15 18:42:39.529175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.529190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 
00:27:54.205 [2024-07-15 18:42:39.539162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.539215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.539229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.539236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.539243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.539257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.549196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.549254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.549268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.549275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.549281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.549295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.559224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.559280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.559295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.559303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.559309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.559324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 
00:27:54.205 [2024-07-15 18:42:39.569284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.569345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.569360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.569367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.569374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.569388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.579277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.579332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.579349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.579357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.579363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.579378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.589248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.589314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.589329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.589340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.589346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.589360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 
00:27:54.205 [2024-07-15 18:42:39.599331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.599386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.599400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.599408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.599414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.599428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.609370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.609423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.609437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.609447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.609454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.609469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 00:27:54.205 [2024-07-15 18:42:39.619389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.205 [2024-07-15 18:42:39.619444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.205 [2024-07-15 18:42:39.619458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.205 [2024-07-15 18:42:39.619466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.205 [2024-07-15 18:42:39.619473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb810000b90 00:27:54.205 [2024-07-15 18:42:39.619487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:54.205 qpair failed and we were unable to recover it. 
[... 2024-07-15 18:42:39.629 through 18:42:39.860 (wall clock 00:27:54.205 to 00:27:54.465): 24 further iterations of the identical CONNECT-failure block elided; each attempt targets qpair id 1 / tqpair=0x7fb810000b90, fails with rc -5, sct 1, sc 130 and CQ transport error -6 (No such device or address), and ends with "qpair failed and we were unable to recover it." ...]
00:27:54.465 [2024-07-15 18:42:39.870116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.465 [2024-07-15 18:42:39.870209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.465 [2024-07-15 18:42:39.870263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.465 [2024-07-15 18:42:39.870287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.465 [2024-07-15 18:42:39.870307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb800000b90 00:27:54.465 [2024-07-15 18:42:39.870367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:54.465 qpair failed and we were unable to recover it. 00:27:54.465 [2024-07-15 18:42:39.880134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.465 [2024-07-15 18:42:39.880222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.465 [2024-07-15 18:42:39.880255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.465 [2024-07-15 18:42:39.880271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.465 [2024-07-15 18:42:39.880286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb800000b90 00:27:54.465 [2024-07-15 18:42:39.880320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:54.465 qpair failed and we were unable to recover it. 00:27:54.465 [2024-07-15 18:42:39.890207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.465 [2024-07-15 18:42:39.890275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.465 [2024-07-15 18:42:39.890297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.465 [2024-07-15 18:42:39.890309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.465 [2024-07-15 18:42:39.890318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb800000b90 00:27:54.465 [2024-07-15 18:42:39.890344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:54.465 qpair failed and we were unable to recover it. 
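The repeating block above is the expected failure signature for this disconnect case: the target has dropped its controller state, so each I/O-qpair CONNECT for controller ID 0x1 is rejected with fabrics status sct 1 / sc 0x82 (CONNECT invalid parameters), which the host reports first as rc -5 (-EIO) from the CONNECT poll and then as CQ transport error -6 (-ENXIO). A minimal host-side sketch of where such a rejection surfaces through SPDK's public API; the helper name and the assumption that `ctrlr` came from an earlier successful spdk_nvme_connect() are illustrative, not taken from the test:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Try to add one I/O qpair to an already-connected fabrics controller.
 * With the default (synchronous) options the fabrics CONNECT command is
 * issued during allocation, so a target-side rejection such as the
 * "Unknown controller ID 0x1" above comes back as a NULL return here. */
static struct spdk_nvme_qpair *
try_add_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;

	/* NULL/0: use the controller's default I/O qpair options. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		fprintf(stderr, "I/O qpair CONNECT rejected by target\n");
	}
	return qpair;
}
```

In synchronous mode the NULL return is all the application sees; the detailed sct/sc pair only appears in the driver log, as above.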
00:27:54.465 [2024-07-15 18:42:39.900215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.465 [2024-07-15 18:42:39.900319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.465 [2024-07-15 18:42:39.900382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.465 [2024-07-15 18:42:39.900407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.465 [2024-07-15 18:42:39.900426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb808000b90 00:27:54.465 [2024-07-15 18:42:39.900475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:54.465 qpair failed and we were unable to recover it. 00:27:54.465 [2024-07-15 18:42:39.910212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:54.465 [2024-07-15 18:42:39.910311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:54.465 [2024-07-15 18:42:39.910346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:54.465 [2024-07-15 18:42:39.910361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.465 [2024-07-15 18:42:39.910375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb808000b90 00:27:54.465 [2024-07-15 18:42:39.910405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:54.465 qpair failed and we were unable to recover it. 00:27:54.465 [2024-07-15 18:42:39.910513] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:54.465 A controller has encountered a failure and is being reset. 00:27:54.465 [2024-07-15 18:42:39.910599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x510000 (9): Bad file descriptor 00:27:54.465 Controller properly reset. 00:27:54.465 Initializing NVMe Controllers 00:27:54.465 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:54.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:54.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:54.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:54.465 Initialization complete. Launching workers. 
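Once the keep-alive submission fails too, the driver escalates from per-qpair errors to a full controller reset, after which the log shows the controller re-attached and an I/O qpair re-associated with each of the four lcores. A hedged sketch of that detect-and-reset step using SPDK's public API (the variable and function names here are assumptions, not the test's own code):

```c
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair; if the transport reports -ENXIO (the
 * "CQ transport error -6 (No such device or address)" above),
 * reset the whole controller, as in "Controller properly reset." */
static void
poll_or_reset(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything available". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
			/* After a successful reset the application must
			 * reconnect or reallocate its I/O qpairs. */
			fprintf(stderr, "controller reset; re-create qpairs\n");
		}
	}
}
```

spdk_nvme_ctrlr_reset() tears down and reconnects the admin queue; the application remains responsible for bringing its I/O qpairs back afterwards, which is what the per-lcore "Associating TCP ..." lines reflect.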
00:27:54.465 Starting thread on core 1 00:27:54.465 Starting thread on core 2 00:27:54.465 Starting thread on core 3 00:27:54.465 Starting thread on core 0 00:27:54.465 18:42:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:54.465 00:27:54.465 real 0m11.301s 00:27:54.465 user 0m21.581s 00:27:54.465 sys 0m4.614s 00:27:54.465 18:42:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.465 18:42:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.465 ************************************ 00:27:54.465 END TEST nvmf_target_disconnect_tc2 00:27:54.465 ************************************ 00:27:54.465 18:42:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.466 18:42:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.466 rmmod nvme_tcp 00:27:54.466 rmmod nvme_fabrics 00:27:54.723 rmmod nvme_keyring 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4076940 ']' 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4076940 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 4076940 ']' 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 4076940 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4076940 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4076940' 00:27:54.723 killing process with pid 4076940 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 4076940 00:27:54.723 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 4076940 00:27:54.979 
18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.979 18:42:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.881 18:42:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.881 00:27:56.881 real 0m19.776s 00:27:56.881 user 0m48.721s 00:27:56.881 sys 0m9.363s 00:27:56.881 18:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.881 18:42:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.881 ************************************ 00:27:56.881 END TEST nvmf_target_disconnect 00:27:56.881 ************************************ 00:27:56.881 18:42:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:56.881 18:42:42 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:56.881 18:42:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.881 18:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 18:42:42 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:57.140 00:27:57.140 real 21m21.920s 00:27:57.140 user 45m16.828s 00:27:57.140 sys 6m35.468s 00:27:57.140 18:42:42 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.140 18:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 ************************************ 00:27:57.140 END TEST nvmf_tcp 00:27:57.140 ************************************ 00:27:57.140 18:42:42 -- common/autotest_common.sh@1142 -- # return 0 00:27:57.140 18:42:42 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:57.140 18:42:42 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:57.140 18:42:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:57.140 18:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.140 18:42:42 -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 ************************************ 00:27:57.140 START TEST spdkcli_nvmf_tcp 00:27:57.140 ************************************ 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:57.140 * Looking for test storage... 
00:27:57.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.140 18:42:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- paths/export.sh@2-@6 -- # [... full PATH export dump elided: export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the system PATH and exports it ...] 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4078531 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4078531 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 4078531 ']' 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.141 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.141 [2024-07-15 18:42:42.681628] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:27:57.141 [2024-07-15 18:42:42.681680] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078531 ] 00:27:57.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.399 [2024-07-15 18:42:42.746843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.399 [2024-07-15 18:42:42.825992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.399 [2024-07-15 18:42:42.825993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.964 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.964 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:57.964 18:42:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:57.964 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.964 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:58.221 18:42:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:58.221 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:58.221 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:58.221 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:58.221 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:58.221 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:58.221 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:58.221 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:58.221 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:58.221 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:58.221 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:58.221 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:58.221 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:58.221 ' 00:28:00.748 [2024-07-15 18:42:46.100923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.122 [2024-07-15 18:42:47.381132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:04.647 [2024-07-15 18:42:49.760399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:06.542 [2024-07-15 18:42:51.818952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:07.915 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:07.915 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:07.915 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:07.915 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:07.915 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:07.915 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:07.915 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:07.915 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:08.173 18:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.431 18:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:08.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:08.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:08.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:08.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:08.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:08.431 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:08.431 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:08.431 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:08.431 ' 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:13.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:13.693 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:13.693 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:13.693 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4078531 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 4078531 ']' 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 4078531 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4078531 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4078531' 00:28:13.950 killing process with pid 4078531 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 4078531 00:28:13.950 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 4078531 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4078531 ']' 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4078531 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 4078531 ']' 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 4078531 00:28:14.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4078531) - No such process 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 4078531 is not found' 00:28:14.208 Process with pid 4078531 is not found 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:14.208 00:28:14.208 real 0m17.100s 00:28:14.208 user 0m37.205s 00:28:14.208 sys 0m0.863s 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.208 18:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 ************************************ 00:28:14.208 END TEST spdkcli_nvmf_tcp 00:28:14.208 ************************************ 00:28:14.208 18:42:59 -- common/autotest_common.sh@1142 -- # return 0 00:28:14.208 18:42:59 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:14.208 18:42:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.208 18:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.208 18:42:59 -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 ************************************ 00:28:14.208 START TEST nvmf_identify_passthru 00:28:14.208 ************************************ 00:28:14.208 18:42:59 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:14.208 * Looking for test storage... 00:28:14.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:14.467 18:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.467 18:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:14.467 18:42:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.467 18:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.467 18:42:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:14.467 18:42:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.467 18:42:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.467 18:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.730 18:43:05 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:19.730 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:19.730 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:19.730 Found net devices under 0000:86:00.0: cvl_0_0 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.730 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:19.731 Found net devices under 0000:86:00.1: cvl_0_1 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
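The discovery pass above resolves each supported NIC PCI function to its kernel net device by globbing sysfs, which is how cvl_0_0 and cvl_0_1 were found under 0000:86:00.0 and 0000:86:00.1. A minimal standalone sketch of that lookup (bash; the BDFs are the e810 ports seen in this run, and the operstate read is an assumption about what feeds the [[ up == up ]] check in the trace):

  # Map NIC PCI functions to net device names, mirroring
  # gather_supported_nvmf_pci_devs in test/nvmf/common.sh.
  for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e $netdir ]] || continue           # no netdev bound to this function
      dev=${netdir##*/}                      # basename: interface name, e.g. cvl_0_0
      state=$(cat "$netdir/operstate")       # presumably what the up/up test compares
      echo "Found net devices under $pci: $dev ($state)"
    done
  done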
00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.731 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:28:19.989 00:28:19.989 --- 10.0.0.2 ping statistics --- 00:28:19.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.989 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:28:19.989 00:28:19.989 --- 10.0.0.1 ping statistics --- 00:28:19.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.989 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.989 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.990 18:43:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.990 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:19.990 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:20.248 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:20.248 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:20.248 18:43:05 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:20.248 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:20.248 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:20.248 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:20.248 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:20.248 18:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:20.248 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.563 
18:43:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:28:25.563 18:43:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:25.563 18:43:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:25.563 18:43:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:25.563 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.747 18:43:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:29.747 18:43:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:29.747 18:43:14 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.747 18:43:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.747 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.747 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4085790 00:28:29.747 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:29.747 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.747 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4085790 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 4085790 ']' 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:29.747 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:29.747 [2024-07-15 18:43:15.084255] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:28:29.747 [2024-07-15 18:43:15.084299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.747 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.747 [2024-07-15 18:43:15.151662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.747 [2024-07-15 18:43:15.230193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.747 [2024-07-15 18:43:15.230231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
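The two identify probes above read the controller's serial and model number through the PCIe path; the same probes are repeated over NVMe/TCP further down and the two sets of values compared. A condensed sketch of that extraction (the binary path and the 0000:5e:00.0 BDF are from this run):

  ident=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  bdf=0000:5e:00.0
  # grep/awk pipeline as used by identify_passthru.sh; awk's $3 keeps only the
  # first token of the value, which is why the model comes back as just "INTEL".
  serial=$("$ident" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$("$ident" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "serial=$serial model=$model"   # PHLN951000C61P6AGN / INTEL in this run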
00:28:29.747 [2024-07-15 18:43:15.230238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.747 [2024-07-15 18:43:15.230243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.747 [2024-07-15 18:43:15.230248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.747 [2024-07-15 18:43:15.230306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.747 [2024-07-15 18:43:15.230871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.747 [2024-07-15 18:43:15.230898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.747 [2024-07-15 18:43:15.230900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:30.683 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:30.683 INFO: Log level set to 20 00:28:30.683 INFO: Requests: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "method": "nvmf_set_config", 00:28:30.683 "id": 1, 00:28:30.683 "params": { 00:28:30.683 "admin_cmd_passthru": { 00:28:30.683 "identify_ctrlr": true 00:28:30.683 } 00:28:30.683 } 00:28:30.683 } 00:28:30.683 00:28:30.683 INFO: response: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "id": 1, 00:28:30.683 "result": true 00:28:30.683 } 00:28:30.683 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.683 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:30.683 INFO: Setting log level to 20 00:28:30.683 INFO: Setting log level to 20 00:28:30.683 INFO: Log level set to 20 00:28:30.683 INFO: Log level set to 20 00:28:30.683 INFO: Requests: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "method": "framework_start_init", 00:28:30.683 "id": 1 00:28:30.683 } 00:28:30.683 00:28:30.683 INFO: Requests: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "method": "framework_start_init", 00:28:30.683 "id": 1 00:28:30.683 } 00:28:30.683 00:28:30.683 [2024-07-15 18:43:15.974826] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:30.683 INFO: response: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "id": 1, 00:28:30.683 "result": true 00:28:30.683 } 00:28:30.683 00:28:30.683 INFO: response: 00:28:30.683 { 00:28:30.683 "jsonrpc": "2.0", 00:28:30.683 "id": 1, 00:28:30.683 "result": true 00:28:30.683 } 00:28:30.683 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.683 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.683 18:43:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:30.683 INFO: Setting log level to 40 00:28:30.683 INFO: Setting log level to 40 00:28:30.683 INFO: Setting log level to 40 00:28:30.683 [2024-07-15 18:43:15.988299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.683 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.684 18:43:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:30.684 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.684 18:43:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:30.684 18:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:30.684 18:43:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.684 18:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 Nvme0n1 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 [2024-07-15 18:43:18.887726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 [ 00:28:33.971 { 00:28:33.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:33.971 "subtype": "Discovery", 00:28:33.971 "listen_addresses": [], 00:28:33.971 "allow_any_host": true, 00:28:33.971 "hosts": [] 00:28:33.971 }, 00:28:33.971 { 00:28:33.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.971 "subtype": "NVMe", 00:28:33.971 "listen_addresses": [ 00:28:33.971 { 00:28:33.971 "trtype": "TCP", 00:28:33.971 "adrfam": "IPv4", 00:28:33.971 "traddr": "10.0.0.2", 00:28:33.971 "trsvcid": "4420" 00:28:33.971 } 00:28:33.971 ], 00:28:33.971 "allow_any_host": true, 00:28:33.971 "hosts": [], 00:28:33.971 "serial_number": 
"SPDK00000000000001", 00:28:33.971 "model_number": "SPDK bdev Controller", 00:28:33.971 "max_namespaces": 1, 00:28:33.971 "min_cntlid": 1, 00:28:33.971 "max_cntlid": 65519, 00:28:33.971 "namespaces": [ 00:28:33.971 { 00:28:33.971 "nsid": 1, 00:28:33.971 "bdev_name": "Nvme0n1", 00:28:33.971 "name": "Nvme0n1", 00:28:33.971 "nguid": "A03A6248CF254114B582E05237A8EC7B", 00:28:33.971 "uuid": "a03a6248-cf25-4114-b582-e05237a8ec7b" 00:28:33.971 } 00:28:33.971 ] 00:28:33.971 } 00:28:33.971 ] 00:28:33.971 18:43:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:33.971 18:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:33.971 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:33.971 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.971 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.971 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.971 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:33.971 18:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.972 rmmod nvme_tcp 00:28:33.972 rmmod nvme_fabrics 00:28:33.972 rmmod nvme_keyring 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:33.972 18:43:19 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4085790 ']' 00:28:33.972 18:43:19 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4085790 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 4085790 ']' 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 4085790 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4085790 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4085790' 00:28:33.972 killing process with pid 4085790 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 4085790 00:28:33.972 18:43:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 4085790 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:36.503 18:43:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.503 18:43:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:36.503 18:43:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.410 18:43:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.410 00:28:38.410 real 0m23.821s 00:28:38.410 user 0m33.408s 00:28:38.410 sys 0m5.118s 00:28:38.410 18:43:23 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.410 18:43:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.410 ************************************ 00:28:38.410 END TEST nvmf_identify_passthru 00:28:38.410 ************************************ 00:28:38.410 18:43:23 -- common/autotest_common.sh@1142 -- # return 0 00:28:38.410 18:43:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:38.410 18:43:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:38.410 18:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.410 18:43:23 -- common/autotest_common.sh@10 -- # set +x 00:28:38.410 ************************************ 00:28:38.410 START TEST nvmf_dif 00:28:38.410 ************************************ 00:28:38.410 18:43:23 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:38.410 * Looking for test storage... 
00:28:38.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:38.410 18:43:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.410 18:43:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.411 18:43:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.411 18:43:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.411 18:43:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.411 18:43:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.411 18:43:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.411 18:43:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.411 18:43:23 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:38.411 18:43:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.411 18:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:38.411 18:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:38.411 18:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:38.411 18:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:38.411 18:43:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.411 18:43:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:38.411 18:43:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:38.411 18:43:23 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:38.411 18:43:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:43.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:43.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:43.682 Found net devices under 0000:86:00.0: cvl_0_0 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:43.682 Found net devices under 0000:86:00.1: cvl_0_1 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.682 18:43:29 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.683 18:43:29 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.941 18:43:29 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:43.941 00:28:43.941 --- 10.0.0.2 ping statistics --- 00:28:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.941 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:43.941 00:28:43.941 --- 10.0.0.1 ping statistics --- 00:28:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.941 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:43.941 18:43:29 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.477 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:46.477 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:46.477 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.736 18:43:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:46.736 18:43:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=4091471 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 4091471 00:28:46.736 18:43:32 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 4091471 ']' 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.736 18:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:46.736 [2024-07-15 18:43:32.224179] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:28:46.736 [2024-07-15 18:43:32.224220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.736 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.736 [2024-07-15 18:43:32.292757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.995 [2024-07-15 18:43:32.369929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.995 [2024-07-15 18:43:32.369960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.995 [2024-07-15 18:43:32.369967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.995 [2024-07-15 18:43:32.369972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.995 [2024-07-15 18:43:32.369977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
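
The target has just been launched inside the cvl_0_0_ns_spdk namespace with shm id 0 and every tracepoint group enabled (-e 0xFFFF), and the harness's waitforlisten now polls /var/tmp/spdk.sock until the app answers RPCs. A minimal stand-alone sketch of that launch-and-wait sequence, with paths and the namespace name taken from this log and the polling loop as a simplified stand-in for waitforlisten:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -i 0 selects shared-memory id 0; -e 0xFFFF enables all tracepoint
    # groups, which is why the notices above suggest 'spdk_trace -s nvmf -i 0'.
    sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Simplified waitforlisten: retry an RPC until the socket is serviceable.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is up on /var/tmp/spdk.sock"
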
00:28:46.995 [2024-07-15 18:43:32.369995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:47.563 18:43:33 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:47.563 18:43:33 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.563 18:43:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:47.563 18:43:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:47.563 [2024-07-15 18:43:33.059502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.563 18:43:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.563 18:43:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:47.563 ************************************ 00:28:47.563 START TEST fio_dif_1_default 00:28:47.563 ************************************ 00:28:47.563 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:47.563 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:47.564 bdev_null0 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.564 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:47.822 [2024-07-15 18:43:33.127772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:47.822 { 00:28:47.822 "params": { 00:28:47.822 "name": "Nvme$subsystem", 00:28:47.822 "trtype": "$TEST_TRANSPORT", 00:28:47.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.822 "adrfam": "ipv4", 00:28:47.822 "trsvcid": "$NVMF_PORT", 00:28:47.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.822 "hdgst": ${hdgst:-false}, 00:28:47.822 "ddgst": ${ddgst:-false} 00:28:47.822 }, 00:28:47.822 "method": "bdev_nvme_attach_controller" 00:28:47.822 } 00:28:47.822 EOF 00:28:47.822 )") 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:47.822 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:47.823 "params": { 00:28:47.823 "name": "Nvme0", 00:28:47.823 "trtype": "tcp", 00:28:47.823 "traddr": "10.0.0.2", 00:28:47.823 "adrfam": "ipv4", 00:28:47.823 "trsvcid": "4420", 00:28:47.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:47.823 "hdgst": false, 00:28:47.823 "ddgst": false 00:28:47.823 }, 00:28:47.823 "method": "bdev_nvme_attach_controller" 00:28:47.823 }' 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:47.823 18:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.081 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:48.081 fio-3.35 00:28:48.081 Starting 1 thread 00:28:48.081 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.272 00:29:00.272 filename0: (groupid=0, jobs=1): err= 0: pid=4091856: Mon Jul 15 18:43:44 2024 00:29:00.272 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10015msec) 00:29:00.272 slat (nsec): min=5730, max=25109, avg=6155.21, stdev=1508.06 00:29:00.272 clat (usec): min=395, max=44029, avg=40362.22, stdev=5118.31 00:29:00.272 lat (usec): min=401, max=44054, avg=40368.38, stdev=5118.33 00:29:00.272 clat percentiles (usec): 00:29:00.272 | 1.00th=[ 465], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:00.272 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:00.272 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:00.272 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:29:00.272 | 99.99th=[43779] 00:29:00.272 bw ( KiB/s): min= 384, max= 448, per=99.70%, avg=395.20, stdev=21.47, samples=20 00:29:00.272 iops : min= 96, max= 112, avg=98.80, stdev= 5.37, samples=20 00:29:00.272 lat 
(usec) : 500=1.61% 00:29:00.272 lat (msec) : 50=98.39% 00:29:00.272 cpu : usr=95.03%, sys=4.72%, ctx=14, majf=0, minf=213 00:29:00.272 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:00.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.272 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.272 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:00.272 00:29:00.272 Run status group 0 (all jobs): 00:29:00.272 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10015-10015msec 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 00:29:00.272 real 0m11.227s 00:29:00.272 user 0m16.559s 00:29:00.272 sys 0m0.782s 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 ************************************ 00:29:00.272 END TEST fio_dif_1_default 00:29:00.272 ************************************ 00:29:00.272 18:43:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:00.272 18:43:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:00.272 18:43:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:00.272 18:43:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 ************************************ 00:29:00.272 START TEST fio_dif_1_multi_subsystems 00:29:00.272 ************************************ 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:00.272 18:43:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 bdev_null0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 [2024-07-15 18:43:44.431637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 bdev_null1 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.272 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.273 { 00:29:00.273 "params": { 00:29:00.273 "name": "Nvme$subsystem", 00:29:00.273 "trtype": "$TEST_TRANSPORT", 00:29:00.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.273 "adrfam": "ipv4", 00:29:00.273 "trsvcid": "$NVMF_PORT", 00:29:00.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.273 "hdgst": ${hdgst:-false}, 00:29:00.273 "ddgst": ${ddgst:-false} 00:29:00.273 }, 00:29:00.273 "method": "bdev_nvme_attach_controller" 00:29:00.273 } 00:29:00.273 EOF 00:29:00.273 )") 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.273 
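
The heredoc traced above is the per-subsystem template that gen_nvmf_target_json expands: each id it is given ("0 1" for this test) becomes one bdev_nvme_attach_controller entry, and the fragments are comma-joined (IFS=,) and checked with jq before being handed to the fio plugin. A simplified sketch of the same pattern; the surrounding array brackets are added here only so jq can validate the joined fragments, since the harness embeds them in fio's full --spdk_json_conf itself:

    config=()
    for subsystem in 0 1; do
        config+=("{
          \"method\": \"bdev_nvme_attach_controller\",
          \"params\": {
            \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\",
            \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
            \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
            \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
            \"hdgst\": false, \"ddgst\": false
          }
        }")
    done
    # Comma-join the fragments and wrap them so jq can parse the result.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .
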
18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.273 { 00:29:00.273 "params": { 00:29:00.273 "name": "Nvme$subsystem", 00:29:00.273 "trtype": "$TEST_TRANSPORT", 00:29:00.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.273 "adrfam": "ipv4", 00:29:00.273 "trsvcid": "$NVMF_PORT", 00:29:00.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.273 "hdgst": ${hdgst:-false}, 00:29:00.273 "ddgst": ${ddgst:-false} 00:29:00.273 }, 00:29:00.273 "method": "bdev_nvme_attach_controller" 00:29:00.273 } 00:29:00.273 EOF 00:29:00.273 )") 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
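
The ldd | grep libasan | awk '{print $3}' probes traced here are how the fio_bdev wrapper builds its LD_PRELOAD: if the SPDK fio plugin was linked against a sanitizer runtime, that runtime must be preloaded ahead of the plugin. A sketch of the probe and the resulting invocation; the config and job file names are placeholders, since the harness streams both over /dev/fd:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # Empty when SPDK was built without ASan, as in this run ([[ -n '' ]] in the trace).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio
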
00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.273 "params": { 00:29:00.273 "name": "Nvme0", 00:29:00.273 "trtype": "tcp", 00:29:00.273 "traddr": "10.0.0.2", 00:29:00.273 "adrfam": "ipv4", 00:29:00.273 "trsvcid": "4420", 00:29:00.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.273 "hdgst": false, 00:29:00.273 "ddgst": false 00:29:00.273 }, 00:29:00.273 "method": "bdev_nvme_attach_controller" 00:29:00.273 },{ 00:29:00.273 "params": { 00:29:00.273 "name": "Nvme1", 00:29:00.273 "trtype": "tcp", 00:29:00.273 "traddr": "10.0.0.2", 00:29:00.273 "adrfam": "ipv4", 00:29:00.273 "trsvcid": "4420", 00:29:00.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.273 "hdgst": false, 00:29:00.273 "ddgst": false 00:29:00.273 }, 00:29:00.273 "method": "bdev_nvme_attach_controller" 00:29:00.273 }' 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:00.273 18:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.273 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:00.273 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:00.273 fio-3.35 00:29:00.273 Starting 2 threads 00:29:00.273 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.309 00:29:10.309 filename0: (groupid=0, jobs=1): err= 0: pid=4093822: Mon Jul 15 18:43:55 2024 00:29:10.309 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10038msec) 00:29:10.309 slat (nsec): min=6002, max=63931, avg=8059.39, stdev=3347.36 00:29:10.309 clat (usec): min=40872, max=42787, avg=41625.69, stdev=482.17 00:29:10.309 lat (usec): min=40878, max=42851, avg=41633.75, stdev=482.56 00:29:10.309 clat percentiles (usec): 00:29:10.309 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:10.309 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:29:10.309 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:10.309 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:10.309 | 99.99th=[42730] 
00:29:10.309 bw ( KiB/s): min= 352, max= 416, per=49.98%, avg=384.00, stdev=14.68, samples=20 00:29:10.309 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:29:10.309 lat (msec) : 50=100.00% 00:29:10.309 cpu : usr=97.47%, sys=2.23%, ctx=38, majf=0, minf=90 00:29:10.309 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.309 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.309 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:10.309 filename1: (groupid=0, jobs=1): err= 0: pid=4093823: Mon Jul 15 18:43:55 2024 00:29:10.309 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10022msec) 00:29:10.309 slat (nsec): min=5997, max=64500, avg=8226.48, stdev=3644.45 00:29:10.309 clat (usec): min=40751, max=42318, avg=41557.91, stdev=494.81 00:29:10.309 lat (usec): min=40765, max=42346, avg=41566.13, stdev=495.02 00:29:10.309 clat percentiles (usec): 00:29:10.309 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:10.309 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:29:10.309 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:10.309 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:10.309 | 99.99th=[42206] 00:29:10.309 bw ( KiB/s): min= 352, max= 416, per=49.98%, avg=384.00, stdev=10.38, samples=20 00:29:10.309 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:29:10.309 lat (msec) : 50=100.00% 00:29:10.309 cpu : usr=97.64%, sys=2.07%, ctx=12, majf=0, minf=237 00:29:10.309 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.309 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.309 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:10.309 00:29:10.309 Run status group 0 (all jobs): 00:29:10.309 READ: bw=768KiB/s (787kB/s), 384KiB/s-385KiB/s (393kB/s-394kB/s), io=7712KiB (7897kB), run=10022-10038msec 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.309 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:10.595 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.595 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:10.595 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.595 18:43:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:10.595 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.595 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 00:29:10.596 real 0m11.513s 00:29:10.596 user 0m26.490s 00:29:10.596 sys 0m0.714s 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 ************************************ 00:29:10.596 END TEST fio_dif_1_multi_subsystems 00:29:10.596 ************************************ 00:29:10.596 18:43:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:10.596 18:43:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:10.596 18:43:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:10.596 18:43:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 ************************************ 00:29:10.596 START TEST fio_dif_rand_params 00:29:10.596 ************************************ 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
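
fio_dif_rand_params repeats the per-subsystem setup with NULL_DIF=3: the null bdev created next carries a 16-byte metadata field per block and DIF type 3 protection, where the two earlier tests used type 1. These are the same four RPCs issued with the RPC client directly instead of the harness's rpc_cmd wrapper, arguments copied from the trace that follows:

    # bdev_null_create <name> <size_mb> <block_size>: 64 MB of 512-byte
    # blocks, each extended with 16 bytes of protection-information metadata.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 \
        --md-size 16 --dif-type 3
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
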
00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 bdev_null0 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.596 [2024-07-15 18:43:56.014942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.596 { 00:29:10.596 "params": { 00:29:10.596 "name": "Nvme$subsystem", 00:29:10.596 "trtype": "$TEST_TRANSPORT", 00:29:10.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.596 "adrfam": "ipv4", 00:29:10.596 "trsvcid": "$NVMF_PORT", 00:29:10.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.596 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.596 "hdgst": ${hdgst:-false}, 00:29:10.596 "ddgst": ${ddgst:-false} 00:29:10.596 }, 00:29:10.596 "method": "bdev_nvme_attach_controller" 00:29:10.596 } 00:29:10.596 EOF 00:29:10.596 )") 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.596 "params": { 00:29:10.596 "name": "Nvme0", 00:29:10.596 "trtype": "tcp", 00:29:10.596 "traddr": "10.0.0.2", 00:29:10.596 "adrfam": "ipv4", 00:29:10.596 "trsvcid": "4420", 00:29:10.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.596 "hdgst": false, 00:29:10.596 "ddgst": false 00:29:10.596 }, 00:29:10.596 "method": "bdev_nvme_attach_controller" 00:29:10.596 }' 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:10.596 18:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.855 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:10.855 ... 
00:29:10.855 fio-3.35 00:29:10.855 Starting 3 threads 00:29:11.113 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.673 00:29:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=4095805: Mon Jul 15 18:44:02 2024 00:29:17.673 read: IOPS=325, BW=40.7MiB/s (42.6MB/s)(204MiB/5007msec) 00:29:17.673 slat (nsec): min=6220, max=53826, avg=14914.85, stdev=7487.46 00:29:17.673 clat (usec): min=3106, max=52322, avg=9199.68, stdev=7159.33 00:29:17.673 lat (usec): min=3112, max=52352, avg=9214.60, stdev=7159.94 00:29:17.673 clat percentiles (usec): 00:29:17.673 | 1.00th=[ 3621], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6849], 00:29:17.673 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:29:17.673 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10421], 00:29:17.673 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:29:17.673 | 99.99th=[52167] 00:29:17.673 bw ( KiB/s): min=31488, max=50688, per=34.06%, avg=41651.20, stdev=5723.13, samples=10 00:29:17.673 iops : min= 246, max= 396, avg=325.40, stdev=44.71, samples=10 00:29:17.673 lat (msec) : 4=2.76%, 10=89.87%, 20=4.42%, 50=2.39%, 100=0.55% 00:29:17.673 cpu : usr=95.86%, sys=3.80%, ctx=12, majf=0, minf=90 00:29:17.673 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 issued rwts: total=1629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=4095806: Mon Jul 15 18:44:02 2024 00:29:17.673 read: IOPS=316, BW=39.6MiB/s (41.5MB/s)(200MiB/5043msec) 00:29:17.673 slat (nsec): min=6053, max=42466, avg=14636.62, stdev=7031.19 00:29:17.673 clat (usec): min=3109, max=53204, avg=9431.15, stdev=7638.96 00:29:17.673 lat (usec): min=3116, max=53215, avg=9445.79, stdev=7638.70 00:29:17.673 clat percentiles (usec): 00:29:17.673 | 1.00th=[ 3228], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6718], 00:29:17.673 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:29:17.673 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[10552], 00:29:17.673 | 99.00th=[50594], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:29:17.673 | 99.99th=[53216] 00:29:17.673 bw ( KiB/s): min=30976, max=45056, per=33.39%, avg=40832.00, stdev=5032.86, samples=10 00:29:17.673 iops : min= 242, max= 352, avg=319.00, stdev=39.32, samples=10 00:29:17.673 lat (msec) : 4=3.44%, 10=86.41%, 20=6.83%, 50=1.63%, 100=1.69% 00:29:17.673 cpu : usr=96.59%, sys=3.09%, ctx=11, majf=0, minf=110 00:29:17.673 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 issued rwts: total=1597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=4095807: Mon Jul 15 18:44:02 2024 00:29:17.673 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(199MiB/5004msec) 00:29:17.673 slat (nsec): min=6070, max=62656, avg=16824.20, stdev=9233.96 00:29:17.673 clat (usec): min=3585, max=51389, avg=9408.99, stdev=4989.75 00:29:17.673 lat (usec): min=3614, max=51408, avg=9425.82, stdev=4989.96 00:29:17.673 clat percentiles 
(usec): 00:29:17.673 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 6783], 00:29:17.673 | 30.00th=[ 7504], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:29:17.673 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:29:17.673 | 99.00th=[46924], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:29:17.673 | 99.99th=[51643] 00:29:17.673 bw ( KiB/s): min=32768, max=46848, per=33.29%, avg=40704.00, stdev=3941.99, samples=10 00:29:17.673 iops : min= 256, max= 366, avg=318.00, stdev=30.80, samples=10 00:29:17.673 lat (msec) : 4=0.38%, 10=65.01%, 20=33.29%, 50=0.88%, 100=0.44% 00:29:17.673 cpu : usr=90.17%, sys=6.20%, ctx=439, majf=0, minf=174 00:29:17.673 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.673 issued rwts: total=1592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:17.673 00:29:17.673 Run status group 0 (all jobs): 00:29:17.673 READ: bw=119MiB/s (125MB/s), 39.6MiB/s-40.7MiB/s (41.5MB/s-42.6MB/s), io=602MiB (632MB), run=5004-5043msec 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 bdev_null0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 [2024-07-15 18:44:02.301584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 bdev_null1 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.673 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 bdev_null2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.674 { 00:29:17.674 "params": { 00:29:17.674 "name": "Nvme$subsystem", 00:29:17.674 "trtype": "$TEST_TRANSPORT", 00:29:17.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.674 "adrfam": "ipv4", 00:29:17.674 "trsvcid": "$NVMF_PORT", 00:29:17.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.674 "hdgst": ${hdgst:-false}, 00:29:17.674 "ddgst": ${ddgst:-false} 00:29:17.674 }, 00:29:17.674 "method": "bdev_nvme_attach_controller" 00:29:17.674 } 00:29:17.674 EOF 00:29:17.674 )") 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.674 { 00:29:17.674 "params": { 00:29:17.674 "name": "Nvme$subsystem", 00:29:17.674 "trtype": "$TEST_TRANSPORT", 00:29:17.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.674 "adrfam": "ipv4", 00:29:17.674 "trsvcid": "$NVMF_PORT", 00:29:17.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.674 "hdgst": ${hdgst:-false}, 00:29:17.674 "ddgst": ${ddgst:-false} 00:29:17.674 }, 00:29:17.674 "method": "bdev_nvme_attach_controller" 00:29:17.674 } 00:29:17.674 EOF 00:29:17.674 )") 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:17.674 18:44:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:17.674 { 00:29:17.674 "params": { 00:29:17.674 "name": "Nvme$subsystem", 00:29:17.674 "trtype": "$TEST_TRANSPORT", 00:29:17.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.674 "adrfam": "ipv4", 00:29:17.674 "trsvcid": "$NVMF_PORT", 00:29:17.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.674 "hdgst": ${hdgst:-false}, 00:29:17.674 "ddgst": ${ddgst:-false} 00:29:17.674 }, 00:29:17.674 "method": "bdev_nvme_attach_controller" 00:29:17.674 } 00:29:17.674 EOF 00:29:17.674 )") 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:17.674 18:44:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:17.674 "params": { 00:29:17.674 "name": "Nvme0", 00:29:17.674 "trtype": "tcp", 00:29:17.674 "traddr": "10.0.0.2", 00:29:17.674 "adrfam": "ipv4", 00:29:17.674 "trsvcid": "4420", 00:29:17.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.674 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.674 "hdgst": false, 00:29:17.674 "ddgst": false 00:29:17.674 }, 00:29:17.675 "method": "bdev_nvme_attach_controller" 00:29:17.675 },{ 00:29:17.675 "params": { 00:29:17.675 "name": "Nvme1", 00:29:17.675 "trtype": "tcp", 00:29:17.675 "traddr": "10.0.0.2", 00:29:17.675 "adrfam": "ipv4", 00:29:17.675 "trsvcid": "4420", 00:29:17.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:17.675 "hdgst": false, 00:29:17.675 "ddgst": false 00:29:17.675 }, 00:29:17.675 "method": "bdev_nvme_attach_controller" 00:29:17.675 },{ 00:29:17.675 "params": { 00:29:17.675 "name": "Nvme2", 00:29:17.675 "trtype": "tcp", 00:29:17.675 "traddr": "10.0.0.2", 00:29:17.675 "adrfam": "ipv4", 00:29:17.675 "trsvcid": "4420", 00:29:17.675 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.675 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:17.675 "hdgst": false, 00:29:17.675 "ddgst": false 00:29:17.675 }, 00:29:17.675 "method": "bdev_nvme_attach_controller" 00:29:17.675 }' 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:17.675 
18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:17.675 18:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:17.675 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:17.675 ... 00:29:17.675 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:17.675 ... 00:29:17.675 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:17.675 ... 00:29:17.675 fio-3.35 00:29:17.675 Starting 24 threads 00:29:17.675 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.877 00:29:29.877 filename0: (groupid=0, jobs=1): err= 0: pid=4097068: Mon Jul 15 18:44:13 2024 00:29:29.877 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10006msec) 00:29:29.877 slat (usec): min=7, max=224, avg=16.13, stdev=10.14 00:29:29.877 clat (usec): min=4338, max=43708, avg=25985.18, stdev=2740.58 00:29:29.877 lat (usec): min=4347, max=43932, avg=26001.31, stdev=2742.82 00:29:29.877 clat percentiles (usec): 00:29:29.877 | 1.00th=[10814], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:29:29.877 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.877 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30016], 00:29:29.877 | 99.00th=[30278], 99.50th=[30278], 99.90th=[42730], 99.95th=[43254], 00:29:29.877 | 99.99th=[43779] 00:29:29.877 bw ( KiB/s): min= 2176, max= 2810, per=4.22%, avg=2451.89, stdev=148.61, samples=19 00:29:29.877 iops : min= 544, max= 702, avg=612.95, stdev=37.08, samples=19 00:29:29.877 lat (msec) : 10=0.86%, 20=0.18%, 50=98.96% 00:29:29.877 cpu : usr=97.96%, sys=1.17%, ctx=237, majf=0, minf=50 00:29:29.877 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.877 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.877 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.877 filename0: (groupid=0, jobs=1): err= 0: pid=4097069: Mon Jul 15 18:44:13 2024 00:29:29.877 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10005msec) 00:29:29.877 slat (nsec): min=6667, max=91102, avg=39277.99, stdev=17458.71 00:29:29.877 clat (usec): min=22276, max=63227, avg=26102.61, stdev=2997.28 00:29:29.877 lat (usec): min=22305, max=63259, avg=26141.89, stdev=2997.98 00:29:29.877 clat percentiles (usec): 00:29:29.877 | 1.00th=[22676], 5.00th=[23725], 10.00th=[24249], 20.00th=[24249], 00:29:29.877 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.877 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29492], 00:29:29.877 | 99.00th=[30278], 99.50th=[54789], 99.90th=[63177], 99.95th=[63177], 00:29:29.877 | 99.99th=[63177] 00:29:29.877 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2425.26, stdev=131.33, samples=19 00:29:29.877 iops : min= 544, max= 672, avg=606.32, stdev=32.83, samples=19 00:29:29.877 lat 
(msec) : 50=99.47%, 100=0.53% 00:29:29.877 cpu : usr=98.42%, sys=0.89%, ctx=169, majf=0, minf=30 00:29:29.877 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097070: Mon Jul 15 18:44:13 2024 00:29:29.878 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.878 slat (nsec): min=11469, max=97322, avg=45782.87, stdev=17304.87 00:29:29.878 clat (usec): min=19438, max=58198, avg=25995.41, stdev=2611.34 00:29:29.878 lat (usec): min=19453, max=58242, avg=26041.19, stdev=2613.32 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.878 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25822], 60.00th=[26084], 00:29:29.878 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:29:29.878 | 99.00th=[30016], 99.50th=[45351], 99.90th=[57934], 99.95th=[57934], 00:29:29.878 | 99.99th=[58459] 00:29:29.878 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.878 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.878 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:29:29.878 cpu : usr=98.51%, sys=0.88%, ctx=78, majf=0, minf=22 00:29:29.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097071: Mon Jul 15 18:44:13 2024 00:29:29.878 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.878 slat (nsec): min=7837, max=82080, avg=41785.85, stdev=14770.12 00:29:29.878 clat (usec): min=16222, max=57935, avg=26068.95, stdev=2625.81 00:29:29.878 lat (usec): min=16232, max=57960, avg=26110.74, stdev=2626.56 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:29:29.878 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.878 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:29:29.878 | 99.00th=[30278], 99.50th=[44827], 99.90th=[57410], 99.95th=[57934], 00:29:29.878 | 99.99th=[57934] 00:29:29.878 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.878 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.878 lat (msec) : 20=0.30%, 50=99.44%, 100=0.26% 00:29:29.878 cpu : usr=98.60%, sys=0.83%, ctx=102, majf=0, minf=29 00:29:29.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097072: Mon Jul 
15 18:44:13 2024 00:29:29.878 read: IOPS=609, BW=2440KiB/s (2498kB/s)(23.9MiB/10047msec) 00:29:29.878 slat (nsec): min=8198, max=62651, avg=20280.06, stdev=6141.02 00:29:29.878 clat (usec): min=4729, max=54720, avg=26059.32, stdev=2929.29 00:29:29.878 lat (usec): min=4746, max=54737, avg=26079.60, stdev=2928.88 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[21627], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:29:29.878 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.878 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30016], 00:29:29.878 | 99.00th=[30278], 99.50th=[38011], 99.90th=[54789], 99.95th=[54789], 00:29:29.878 | 99.99th=[54789] 00:29:29.878 bw ( KiB/s): min= 2176, max= 2810, per=4.21%, avg=2444.50, stdev=159.58, samples=20 00:29:29.878 iops : min= 544, max= 702, avg=611.10, stdev=39.83, samples=20 00:29:29.878 lat (msec) : 10=0.52%, 20=0.26%, 50=98.96%, 100=0.26% 00:29:29.878 cpu : usr=98.39%, sys=0.93%, ctx=104, majf=0, minf=38 00:29:29.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097073: Mon Jul 15 18:44:13 2024 00:29:29.878 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10013msec) 00:29:29.878 slat (usec): min=6, max=134, avg=44.55, stdev=20.09 00:29:29.878 clat (usec): min=17897, max=73103, avg=26072.21, stdev=3151.09 00:29:29.878 lat (usec): min=17905, max=73121, avg=26116.76, stdev=3152.46 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.878 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.878 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:29:29.878 | 99.00th=[30278], 99.50th=[57410], 99.90th=[64750], 99.95th=[64750], 00:29:29.878 | 99.99th=[72877] 00:29:29.878 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2413.00, stdev=132.95, samples=20 00:29:29.878 iops : min= 544, max= 640, avg=603.25, stdev=33.24, samples=20 00:29:29.878 lat (msec) : 20=0.03%, 50=99.44%, 100=0.53% 00:29:29.878 cpu : usr=98.95%, sys=0.66%, ctx=19, majf=0, minf=22 00:29:29.878 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097074: Mon Jul 15 18:44:13 2024 00:29:29.878 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10007msec) 00:29:29.878 slat (nsec): min=6259, max=74200, avg=35747.13, stdev=14436.62 00:29:29.878 clat (usec): min=22445, max=65389, avg=26198.34, stdev=3079.20 00:29:29.878 lat (usec): min=22464, max=65405, avg=26234.09, stdev=3078.23 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:29:29.878 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.878 | 70.00th=[26346], 80.00th=[27657], 
90.00th=[28967], 95.00th=[29754], 00:29:29.878 | 99.00th=[30278], 99.50th=[54789], 99.90th=[65274], 99.95th=[65274], 00:29:29.878 | 99.99th=[65274] 00:29:29.878 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2412.80, stdev=133.12, samples=20 00:29:29.878 iops : min= 544, max= 640, avg=603.20, stdev=33.28, samples=20 00:29:29.878 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.878 cpu : usr=98.01%, sys=1.24%, ctx=118, majf=0, minf=31 00:29:29.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.878 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.878 filename0: (groupid=0, jobs=1): err= 0: pid=4097075: Mon Jul 15 18:44:13 2024 00:29:29.878 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.7MiB/10028msec) 00:29:29.878 slat (nsec): min=7500, max=70301, avg=24936.27, stdev=11158.23 00:29:29.878 clat (usec): min=22464, max=62600, avg=26265.12, stdev=2981.14 00:29:29.878 lat (usec): min=22480, max=62621, avg=26290.06, stdev=2980.30 00:29:29.878 clat percentiles (usec): 00:29:29.878 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:29:29.878 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.879 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[30016], 00:29:29.879 | 99.00th=[30278], 99.50th=[54789], 99.90th=[62653], 99.95th=[62653], 00:29:29.879 | 99.99th=[62653] 00:29:29.879 bw ( KiB/s): min= 2304, max= 2688, per=4.17%, avg=2419.20, stdev=109.09, samples=20 00:29:29.879 iops : min= 576, max= 672, avg=604.80, stdev=27.27, samples=20 00:29:29.879 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.879 cpu : usr=98.87%, sys=0.75%, ctx=11, majf=0, minf=39 00:29:29.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097076: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.879 slat (nsec): min=7670, max=82542, avg=38993.97, stdev=15927.51 00:29:29.879 clat (usec): min=19466, max=58142, avg=26102.97, stdev=2615.44 00:29:29.879 lat (usec): min=19481, max=58168, avg=26141.97, stdev=2616.56 00:29:29.879 clat percentiles (usec): 00:29:29.879 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:29:29.879 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.879 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:29:29.879 | 99.00th=[30016], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:29:29.879 | 99.99th=[57934] 00:29:29.879 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.879 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.879 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:29:29.879 cpu : usr=98.78%, sys=0.83%, ctx=17, majf=0, minf=40 00:29:29.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097077: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=614, BW=2458KiB/s (2517kB/s)(24.1MiB/10050msec) 00:29:29.879 slat (nsec): min=7393, max=47835, avg=16526.19, stdev=5541.72 00:29:29.879 clat (usec): min=1284, max=54527, avg=25898.13, stdev=3632.10 00:29:29.879 lat (usec): min=1298, max=54549, avg=25914.65, stdev=3631.89 00:29:29.879 clat percentiles (usec): 00:29:29.879 | 1.00th=[ 5080], 5.00th=[23462], 10.00th=[24249], 20.00th=[24511], 00:29:29.879 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.879 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30016], 00:29:29.879 | 99.00th=[30278], 99.50th=[38011], 99.90th=[54264], 99.95th=[54264], 00:29:29.879 | 99.99th=[54789] 00:29:29.879 bw ( KiB/s): min= 2176, max= 3200, per=4.24%, avg=2464.00, stdev=215.29, samples=20 00:29:29.879 iops : min= 544, max= 800, avg=616.00, stdev=53.82, samples=20 00:29:29.879 lat (msec) : 2=0.37%, 4=0.15%, 10=1.04%, 20=0.03%, 50=98.15% 00:29:29.879 lat (msec) : 100=0.26% 00:29:29.879 cpu : usr=97.91%, sys=1.22%, ctx=232, majf=0, minf=66 00:29:29.879 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097078: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=609, BW=2438KiB/s (2497kB/s)(23.9MiB/10053msec) 00:29:29.879 slat (nsec): min=7389, max=52194, avg=18384.75, stdev=5828.95 00:29:29.879 clat (usec): min=5018, max=58163, avg=26107.43, stdev=2964.85 00:29:29.879 lat (usec): min=5038, max=58186, avg=26125.81, stdev=2964.82 00:29:29.879 clat percentiles (usec): 00:29:29.879 | 1.00th=[21890], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:29:29.879 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.879 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30016], 00:29:29.879 | 99.00th=[30278], 99.50th=[38011], 99.90th=[54789], 99.95th=[54789], 00:29:29.879 | 99.99th=[57934] 00:29:29.879 bw ( KiB/s): min= 2176, max= 2810, per=4.21%, avg=2444.50, stdev=159.07, samples=20 00:29:29.879 iops : min= 544, max= 702, avg=611.10, stdev=39.71, samples=20 00:29:29.879 lat (msec) : 10=0.52%, 20=0.29%, 50=98.92%, 100=0.26% 00:29:29.879 cpu : usr=99.02%, sys=0.61%, ctx=13, majf=0, minf=35 00:29:29.879 IO depths : 1=0.2%, 2=6.3%, 4=24.8%, 8=56.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097079: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.879 slat (usec): min=7, max=172, avg=43.76, stdev=21.26 00:29:29.879 clat (usec): 
min=19731, max=58100, avg=26031.94, stdev=2624.37 00:29:29.879 lat (usec): min=19742, max=58138, avg=26075.70, stdev=2624.74 00:29:29.879 clat percentiles (usec): 00:29:29.879 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.879 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.879 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29754], 00:29:29.879 | 99.00th=[30278], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:29:29.879 | 99.99th=[57934] 00:29:29.879 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.879 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.879 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:29:29.879 cpu : usr=98.78%, sys=0.78%, ctx=40, majf=0, minf=37 00:29:29.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097080: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10013msec) 00:29:29.879 slat (nsec): min=7793, max=78632, avg=37639.09, stdev=14341.56 00:29:29.879 clat (usec): min=22481, max=71026, avg=26184.03, stdev=3276.08 00:29:29.879 lat (usec): min=22500, max=71046, avg=26221.66, stdev=3275.38 00:29:29.879 clat percentiles (usec): 00:29:29.879 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:29:29.879 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.879 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:29:29.879 | 99.00th=[30278], 99.50th=[54789], 99.90th=[70779], 99.95th=[70779], 00:29:29.879 | 99.99th=[70779] 00:29:29.879 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2413.00, stdev=132.95, samples=20 00:29:29.879 iops : min= 544, max= 640, avg=603.25, stdev=33.24, samples=20 00:29:29.879 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.879 cpu : usr=98.42%, sys=0.93%, ctx=140, majf=0, minf=36 00:29:29.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.879 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.879 filename1: (groupid=0, jobs=1): err= 0: pid=4097081: Mon Jul 15 18:44:13 2024 00:29:29.879 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.879 slat (nsec): min=6409, max=98870, avg=47037.32, stdev=18655.83 00:29:29.879 clat (usec): min=12918, max=58158, avg=25970.26, stdev=2672.60 00:29:29.879 lat (usec): min=12929, max=58201, avg=26017.30, stdev=2674.52 00:29:29.879 clat percentiles (usec): 00:29:29.880 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:29:29.880 | 99.00th=[30278], 99.50th=[45351], 99.90th=[57934], 99.95th=[57934], 00:29:29.880 | 99.99th=[57934] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, 
avg=2425.60, stdev=120.90, samples=20 00:29:29.880 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.880 lat (msec) : 20=0.36%, 50=99.38%, 100=0.26% 00:29:29.880 cpu : usr=98.90%, sys=0.68%, ctx=26, majf=0, minf=30 00:29:29.880 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename1: (groupid=0, jobs=1): err= 0: pid=4097082: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.880 slat (nsec): min=4956, max=86241, avg=43965.71, stdev=13392.13 00:29:29.880 clat (usec): min=19426, max=58182, avg=26024.46, stdev=2632.02 00:29:29.880 lat (usec): min=19433, max=58228, avg=26068.42, stdev=2632.57 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29492], 00:29:29.880 | 99.00th=[30016], 99.50th=[45351], 99.90th=[57934], 99.95th=[57934], 00:29:29.880 | 99.99th=[57934] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.880 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.880 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:29:29.880 cpu : usr=98.54%, sys=0.78%, ctx=182, majf=0, minf=31 00:29:29.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename1: (groupid=0, jobs=1): err= 0: pid=4097083: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=603, BW=2416KiB/s (2474kB/s)(23.6MiB/10014msec) 00:29:29.880 slat (nsec): min=6378, max=91200, avg=40304.89, stdev=16768.70 00:29:29.880 clat (usec): min=22209, max=71577, avg=26113.16, stdev=3289.79 00:29:29.880 lat (usec): min=22219, max=71599, avg=26153.47, stdev=3290.18 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29492], 00:29:29.880 | 99.00th=[30016], 99.50th=[54789], 99.90th=[71828], 99.95th=[71828], 00:29:29.880 | 99.99th=[71828] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2412.80, stdev=133.12, samples=20 00:29:29.880 iops : min= 544, max= 640, avg=603.20, stdev=33.28, samples=20 00:29:29.880 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.880 cpu : usr=99.03%, sys=0.54%, ctx=40, majf=0, minf=38 00:29:29.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 
latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename2: (groupid=0, jobs=1): err= 0: pid=4097084: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=609, BW=2439KiB/s (2498kB/s)(23.9MiB/10050msec) 00:29:29.880 slat (usec): min=6, max=169, avg=45.57, stdev=19.91 00:29:29.880 clat (usec): min=5018, max=57936, avg=25866.43, stdev=3052.59 00:29:29.880 lat (usec): min=5038, max=57961, avg=25912.00, stdev=3055.49 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[21890], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29492], 00:29:29.880 | 99.00th=[30278], 99.50th=[38011], 99.90th=[57934], 99.95th=[57934], 00:29:29.880 | 99.99th=[57934] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2444.80, stdev=160.30, samples=20 00:29:29.880 iops : min= 544, max= 704, avg=611.20, stdev=40.08, samples=20 00:29:29.880 lat (msec) : 10=0.78%, 20=0.03%, 50=98.92%, 100=0.26% 00:29:29.880 cpu : usr=99.03%, sys=0.58%, ctx=30, majf=0, minf=37 00:29:29.880 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename2: (groupid=0, jobs=1): err= 0: pid=4097085: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.880 slat (nsec): min=7645, max=96258, avg=47493.76, stdev=18845.06 00:29:29.880 clat (usec): min=19479, max=58158, avg=25989.95, stdev=2610.47 00:29:29.880 lat (usec): min=19499, max=58187, avg=26037.44, stdev=2612.69 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:29:29.880 | 99.00th=[30016], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:29:29.880 | 99.99th=[57934] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.880 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.880 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:29:29.880 cpu : usr=98.30%, sys=1.02%, ctx=338, majf=0, minf=26 00:29:29.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename2: (groupid=0, jobs=1): err= 0: pid=4097086: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10008msec) 00:29:29.880 slat (nsec): min=5826, max=74301, avg=37342.17, stdev=14376.15 00:29:29.880 clat (usec): min=22454, max=65985, avg=26178.73, stdev=3099.85 00:29:29.880 lat (usec): min=22471, max=66001, avg=26216.07, stdev=3099.12 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 
00:29:29.880 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.880 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:29:29.880 | 99.00th=[30278], 99.50th=[54789], 99.90th=[65799], 99.95th=[65799], 00:29:29.880 | 99.99th=[65799] 00:29:29.880 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2412.80, stdev=133.12, samples=20 00:29:29.880 iops : min= 544, max= 640, avg=603.20, stdev=33.28, samples=20 00:29:29.880 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.880 cpu : usr=98.96%, sys=0.67%, ctx=11, majf=0, minf=21 00:29:29.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.880 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.880 filename2: (groupid=0, jobs=1): err= 0: pid=4097087: Mon Jul 15 18:44:13 2024 00:29:29.880 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.8MiB/10034msec) 00:29:29.880 slat (nsec): min=10083, max=86934, avg=43023.14, stdev=14027.12 00:29:29.880 clat (usec): min=11974, max=58180, avg=26043.39, stdev=2651.10 00:29:29.880 lat (usec): min=11985, max=58221, avg=26086.41, stdev=2652.15 00:29:29.880 clat percentiles (usec): 00:29:29.880 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.880 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.881 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:29:29.881 | 99.00th=[30016], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:29:29.881 | 99.99th=[57934] 00:29:29.881 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2425.60, stdev=120.90, samples=20 00:29:29.881 iops : min= 544, max= 640, avg=606.40, stdev=30.22, samples=20 00:29:29.881 lat (msec) : 20=0.26%, 50=99.44%, 100=0.30% 00:29:29.881 cpu : usr=98.61%, sys=0.86%, ctx=58, majf=0, minf=41 00:29:29.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:29.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.881 filename2: (groupid=0, jobs=1): err= 0: pid=4097088: Mon Jul 15 18:44:13 2024 00:29:29.881 read: IOPS=603, BW=2416KiB/s (2474kB/s)(23.6MiB/10014msec) 00:29:29.881 slat (usec): min=6, max=103, avg=46.16, stdev=19.35 00:29:29.881 clat (usec): min=22324, max=68794, avg=26052.06, stdev=3256.63 00:29:29.881 lat (usec): min=22333, max=68811, avg=26098.22, stdev=3257.22 00:29:29.881 clat percentiles (usec): 00:29:29.881 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:29:29.881 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25822], 60.00th=[26084], 00:29:29.881 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:29:29.881 | 99.00th=[30016], 99.50th=[57410], 99.90th=[68682], 99.95th=[68682], 00:29:29.881 | 99.99th=[68682] 00:29:29.881 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2412.80, stdev=133.12, samples=20 00:29:29.881 iops : min= 544, max= 640, avg=603.20, stdev=33.28, samples=20 00:29:29.881 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.881 cpu : usr=98.96%, sys=0.65%, ctx=34, majf=0, minf=23 00:29:29.881 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.881 filename2: (groupid=0, jobs=1): err= 0: pid=4097089: Mon Jul 15 18:44:13 2024 00:29:29.881 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.7MiB/10030msec) 00:29:29.881 slat (usec): min=6, max=113, avg=18.07, stdev=12.11 00:29:29.881 clat (usec): min=13173, max=69149, avg=26275.94, stdev=3338.42 00:29:29.881 lat (usec): min=13183, max=69180, avg=26294.01, stdev=3338.59 00:29:29.881 clat percentiles (usec): 00:29:29.881 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:29:29.881 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26084], 60.00th=[26346], 00:29:29.881 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:29:29.881 | 99.00th=[38011], 99.50th=[50070], 99.90th=[68682], 99.95th=[68682], 00:29:29.881 | 99.99th=[68682] 00:29:29.881 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2419.20, stdev=124.00, samples=20 00:29:29.881 iops : min= 544, max= 640, avg=604.80, stdev=31.00, samples=20 00:29:29.881 lat (msec) : 20=0.59%, 50=98.90%, 100=0.51% 00:29:29.881 cpu : usr=98.46%, sys=0.98%, ctx=79, majf=0, minf=51 00:29:29.881 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:29.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.881 filename2: (groupid=0, jobs=1): err= 0: pid=4097090: Mon Jul 15 18:44:13 2024 00:29:29.881 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10005msec) 00:29:29.881 slat (nsec): min=6761, max=89768, avg=39637.69, stdev=16770.43 00:29:29.881 clat (usec): min=22184, max=63116, avg=26105.53, stdev=2991.28 00:29:29.881 lat (usec): min=22206, max=63136, avg=26145.16, stdev=2992.20 00:29:29.881 clat percentiles (usec): 00:29:29.881 | 1.00th=[22676], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:29:29.881 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:29:29.881 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[29754], 00:29:29.881 | 99.00th=[30278], 99.50th=[54789], 99.90th=[63177], 99.95th=[63177], 00:29:29.881 | 99.99th=[63177] 00:29:29.881 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2425.26, stdev=131.33, samples=19 00:29:29.881 iops : min= 544, max= 672, avg=606.32, stdev=32.83, samples=19 00:29:29.881 lat (msec) : 50=99.47%, 100=0.53% 00:29:29.881 cpu : usr=98.79%, sys=0.81%, ctx=15, majf=0, minf=40 00:29:29.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:29.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.881 filename2: (groupid=0, jobs=1): err= 0: pid=4097091: Mon Jul 15 18:44:13 2024 00:29:29.881 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.7MiB/10012msec) 00:29:29.881 slat (nsec): min=6431, max=72151, 
avg=16998.92, stdev=12264.77 00:29:29.881 clat (usec): min=10100, max=88341, avg=26365.80, stdev=3547.40 00:29:29.881 lat (usec): min=10117, max=88390, avg=26382.79, stdev=3548.42 00:29:29.881 clat percentiles (usec): 00:29:29.881 | 1.00th=[21890], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:29:29.881 | 30.00th=[24773], 40.00th=[25035], 50.00th=[26346], 60.00th=[26346], 00:29:29.881 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30016], 00:29:29.881 | 99.00th=[30540], 99.50th=[44303], 99.90th=[76022], 99.95th=[76022], 00:29:29.881 | 99.99th=[88605] 00:29:29.881 bw ( KiB/s): min= 2176, max= 2656, per=4.16%, avg=2417.80, stdev=143.94, samples=20 00:29:29.881 iops : min= 544, max= 664, avg=604.45, stdev=35.99, samples=20 00:29:29.881 lat (msec) : 20=0.69%, 50=98.94%, 100=0.36% 00:29:29.881 cpu : usr=97.06%, sys=1.63%, ctx=259, majf=0, minf=55 00:29:29.881 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=81.0%, 16=18.6%, 32=0.0%, >=64=0.0% 00:29:29.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 complete : 0=0.0%, 4=89.5%, 8=10.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.881 issued rwts: total=6060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:29.881 00:29:29.881 Run status group 0 (all jobs): 00:29:29.881 READ: bw=56.7MiB/s (59.4MB/s), 2416KiB/s-2458KiB/s (2474kB/s-2517kB/s), io=570MiB (598MB), run=10005-10053msec 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:29.881 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 bdev_null0 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 [2024-07-15 18:44:14.255732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 bdev_null1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.882 
18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.882 { 00:29:29.882 "params": { 00:29:29.882 "name": "Nvme$subsystem", 00:29:29.882 "trtype": "$TEST_TRANSPORT", 00:29:29.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.882 "adrfam": "ipv4", 00:29:29.882 "trsvcid": "$NVMF_PORT", 00:29:29.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.882 "hdgst": ${hdgst:-false}, 00:29:29.882 "ddgst": ${ddgst:-false} 00:29:29.882 }, 00:29:29.882 "method": "bdev_nvme_attach_controller" 00:29:29.882 } 00:29:29.882 EOF 00:29:29.882 )") 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.882 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.883 { 00:29:29.883 "params": { 00:29:29.883 "name": "Nvme$subsystem", 00:29:29.883 "trtype": "$TEST_TRANSPORT", 00:29:29.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.883 "adrfam": "ipv4", 00:29:29.883 "trsvcid": "$NVMF_PORT", 00:29:29.883 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.883 "hdgst": ${hdgst:-false}, 00:29:29.883 "ddgst": ${ddgst:-false} 00:29:29.883 }, 00:29:29.883 "method": "bdev_nvme_attach_controller" 00:29:29.883 } 00:29:29.883 EOF 00:29:29.883 )") 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:29.883 "params": { 00:29:29.883 "name": "Nvme0", 00:29:29.883 "trtype": "tcp", 00:29:29.883 "traddr": "10.0.0.2", 00:29:29.883 "adrfam": "ipv4", 00:29:29.883 "trsvcid": "4420", 00:29:29.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.883 "hdgst": false, 00:29:29.883 "ddgst": false 00:29:29.883 }, 00:29:29.883 "method": "bdev_nvme_attach_controller" 00:29:29.883 },{ 00:29:29.883 "params": { 00:29:29.883 "name": "Nvme1", 00:29:29.883 "trtype": "tcp", 00:29:29.883 "traddr": "10.0.0.2", 00:29:29.883 "adrfam": "ipv4", 00:29:29.883 "trsvcid": "4420", 00:29:29.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.883 "hdgst": false, 00:29:29.883 "ddgst": false 00:29:29.883 }, 00:29:29.883 "method": "bdev_nvme_attach_controller" 00:29:29.883 }' 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:29.883 18:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.883 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:29.883 ... 00:29:29.883 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:29.883 ... 
00:29:29.883 fio-3.35 00:29:29.883 Starting 4 threads 00:29:29.883 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.151 00:29:35.151 filename0: (groupid=0, jobs=1): err= 0: pid=4099043: Mon Jul 15 18:44:20 2024 00:29:35.151 read: IOPS=2697, BW=21.1MiB/s (22.1MB/s)(105MiB/5002msec) 00:29:35.151 slat (nsec): min=5872, max=83483, avg=17817.05, stdev=11794.90 00:29:35.151 clat (usec): min=591, max=5698, avg=2910.64, stdev=473.94 00:29:35.151 lat (usec): min=602, max=5708, avg=2928.46, stdev=475.17 00:29:35.151 clat percentiles (usec): 00:29:35.151 | 1.00th=[ 1631], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2606], 00:29:35.151 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:29:35.151 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3654], 00:29:35.151 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5538], 00:29:35.151 | 99.99th=[ 5669] 00:29:35.151 bw ( KiB/s): min=19952, max=23664, per=25.42%, avg=21676.44, stdev=1261.23, samples=9 00:29:35.151 iops : min= 2494, max= 2958, avg=2709.56, stdev=157.65, samples=9 00:29:35.151 lat (usec) : 750=0.04%, 1000=0.27% 00:29:35.151 lat (msec) : 2=2.00%, 4=95.53%, 10=2.16% 00:29:35.151 cpu : usr=96.26%, sys=2.88%, ctx=67, majf=0, minf=9 00:29:35.151 IO depths : 1=0.7%, 2=7.7%, 4=62.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.151 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.151 issued rwts: total=13491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.151 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:35.151 filename0: (groupid=0, jobs=1): err= 0: pid=4099044: Mon Jul 15 18:44:20 2024 00:29:35.151 read: IOPS=2715, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:29:35.151 slat (nsec): min=5970, max=70874, avg=14183.70, stdev=9281.86 00:29:35.151 clat (usec): min=710, max=6251, avg=2902.49, stdev=436.72 00:29:35.151 lat (usec): min=733, max=6271, avg=2916.68, stdev=437.74 00:29:35.151 clat percentiles (usec): 00:29:35.151 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2606], 00:29:35.151 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:29:35.151 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3621], 00:29:35.151 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5080], 00:29:35.151 | 99.99th=[ 5800] 00:29:35.151 bw ( KiB/s): min=20464, max=24688, per=25.65%, avg=21871.56, stdev=1296.47, samples=9 00:29:35.151 iops : min= 2558, max= 3086, avg=2733.89, stdev=162.04, samples=9 00:29:35.151 lat (usec) : 750=0.01%, 1000=0.01% 00:29:35.151 lat (msec) : 2=1.58%, 4=97.02%, 10=1.38% 00:29:35.151 cpu : usr=95.00%, sys=3.38%, ctx=278, majf=0, minf=9 00:29:35.151 IO depths : 1=0.2%, 2=7.1%, 4=63.9%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.151 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.151 issued rwts: total=13581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.151 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:35.151 filename1: (groupid=0, jobs=1): err= 0: pid=4099045: Mon Jul 15 18:44:20 2024 00:29:35.151 read: IOPS=2678, BW=20.9MiB/s (21.9MB/s)(105MiB/5001msec) 00:29:35.151 slat (nsec): min=6027, max=64533, avg=12723.92, stdev=8357.36 00:29:35.151 clat (usec): min=672, max=5643, avg=2949.03, stdev=408.41 00:29:35.151 lat (usec): min=679, max=5659, avg=2961.75, 
stdev=408.86 00:29:35.152 clat percentiles (usec): 00:29:35.152 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:29:35.152 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:29:35.152 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3621], 00:29:35.152 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 5014], 00:29:35.152 | 99.99th=[ 5604] 00:29:35.152 bw ( KiB/s): min=19888, max=22656, per=25.26%, avg=21541.33, stdev=926.76, samples=9 00:29:35.152 iops : min= 2486, max= 2832, avg=2692.67, stdev=115.84, samples=9 00:29:35.152 lat (usec) : 750=0.01%, 1000=0.04% 00:29:35.152 lat (msec) : 2=1.25%, 4=97.33%, 10=1.38% 00:29:35.152 cpu : usr=97.78%, sys=1.88%, ctx=9, majf=0, minf=9 00:29:35.152 IO depths : 1=0.3%, 2=5.7%, 4=66.0%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.152 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.152 issued rwts: total=13394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:35.152 filename1: (groupid=0, jobs=1): err= 0: pid=4099046: Mon Jul 15 18:44:20 2024 00:29:35.152 read: IOPS=2568, BW=20.1MiB/s (21.0MB/s)(100MiB/5001msec) 00:29:35.152 slat (nsec): min=6033, max=70717, avg=13332.41, stdev=8942.64 00:29:35.152 clat (usec): min=858, max=5896, avg=3076.65, stdev=455.65 00:29:35.152 lat (usec): min=869, max=5911, avg=3089.98, stdev=455.62 00:29:35.152 clat percentiles (usec): 00:29:35.152 | 1.00th=[ 2008], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2802], 00:29:35.152 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3097], 00:29:35.152 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3621], 95.00th=[ 3884], 00:29:35.152 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5604], 00:29:35.152 | 99.99th=[ 5669] 00:29:35.152 bw ( KiB/s): min=19328, max=21904, per=24.18%, avg=20615.11, stdev=801.40, samples=9 00:29:35.152 iops : min= 2416, max= 2738, avg=2576.89, stdev=100.18, samples=9 00:29:35.152 lat (usec) : 1000=0.01% 00:29:35.152 lat (msec) : 2=0.92%, 4=95.36%, 10=3.71% 00:29:35.152 cpu : usr=97.32%, sys=2.32%, ctx=9, majf=0, minf=9 00:29:35.152 IO depths : 1=0.1%, 2=2.9%, 4=69.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.152 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.152 issued rwts: total=12843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:35.152 00:29:35.152 Run status group 0 (all jobs): 00:29:35.152 READ: bw=83.3MiB/s (87.3MB/s), 20.1MiB/s-21.2MiB/s (21.0MB/s-22.2MB/s), io=416MiB (437MB), run=5001-5002msec 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 00:29:35.152 real 0m24.613s 00:29:35.152 user 4m52.584s 00:29:35.152 sys 0m4.285s 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 ************************************ 00:29:35.152 END TEST fio_dif_rand_params 00:29:35.152 ************************************ 00:29:35.152 18:44:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:35.152 18:44:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:35.152 18:44:20 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.152 18:44:20 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 ************************************ 00:29:35.152 START TEST fio_dif_digest 00:29:35.152 ************************************ 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:35.152 
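Each dif test wraps its fio run in the same RPC lifecycle, and the teardown traced above is the mirror image of the setup that follows: create a null bdev (for the digest test, with 16 bytes of per-block metadata and DIF type 3), publish it through an NVMe-oF subsystem with a TCP listener, then delete both once fio finishes. Condensed to the stock rpc.py client, with arguments copied from the trace (rpc.py on PATH is assumed):

# setup
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... fio runs against nqn.2016-06.io.spdk:cnode0 ...
# teardown
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_null_delete bdev_null0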
18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 bdev_null0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.152 [2024-07-15 18:44:20.703180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.152 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.410 18:44:20 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.410 { 00:29:35.410 "params": { 00:29:35.410 "name": "Nvme$subsystem", 00:29:35.410 "trtype": "$TEST_TRANSPORT", 00:29:35.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.410 "adrfam": "ipv4", 00:29:35.410 "trsvcid": "$NVMF_PORT", 00:29:35.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.410 "hdgst": ${hdgst:-false}, 00:29:35.410 "ddgst": ${ddgst:-false} 00:29:35.410 }, 00:29:35.410 "method": "bdev_nvme_attach_controller" 00:29:35.410 } 00:29:35.410 EOF 00:29:35.410 )") 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
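Before launching fio, fio_bdev probes whether the plugin was linked against a sanitizer runtime: ASan insists on being the first DSO loaded, so if ldd finds libasan (or libclang_rt.asan), that library must precede the plugin in LD_PRELOAD. The probe visible in the trace reduces to the following sketch (plugin path shortened; field 3 of ldd output is the resolved library path):

plugin=./spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # ldd prints lines like "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# In this run both greps came back empty, so LD_PRELOAD ends up carrying only the plugin:
# LD_PRELOAD="$asan_lib $plugin" fio --ioengine=spdk_bdev --spdk_json_conf <(gen_target_json 0) job.fio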
00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.410 "params": { 00:29:35.410 "name": "Nvme0", 00:29:35.410 "trtype": "tcp", 00:29:35.410 "traddr": "10.0.0.2", 00:29:35.410 "adrfam": "ipv4", 00:29:35.410 "trsvcid": "4420", 00:29:35.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.410 "hdgst": true, 00:29:35.410 "ddgst": true 00:29:35.410 }, 00:29:35.410 "method": "bdev_nvme_attach_controller" 00:29:35.410 }' 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:35.410 18:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.679 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:35.679 ... 
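For the digest variant the only change in the generated attach parameters is hdgst/ddgst flipping to true, which makes every NVMe/TCP PDU on the connection carry CRC32C header and data digests for the target to verify. Wrapped in the standard JSON-RPC 2.0 envelope (the envelope is assumed; the trace shows only the params object), the equivalent one-off request would look like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}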
00:29:35.679 fio-3.35 00:29:35.679 Starting 3 threads 00:29:35.679 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.880 00:29:47.880 filename0: (groupid=0, jobs=1): err= 0: pid=4100119: Mon Jul 15 18:44:31 2024 00:29:47.880 read: IOPS=298, BW=37.3MiB/s (39.2MB/s)(375MiB/10046msec) 00:29:47.880 slat (nsec): min=6237, max=48996, avg=14606.80, stdev=6744.94 00:29:47.880 clat (usec): min=7452, max=54798, avg=10011.95, stdev=1851.77 00:29:47.880 lat (usec): min=7464, max=54824, avg=10026.56, stdev=1851.74 00:29:47.880 clat percentiles (usec): 00:29:47.880 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:29:47.880 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:29:47.880 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:29:47.880 | 99.00th=[11731], 99.50th=[11994], 99.90th=[50594], 99.95th=[53216], 00:29:47.880 | 99.99th=[54789] 00:29:47.880 bw ( KiB/s): min=34491, max=39424, per=35.27%, avg=38383.75, stdev=1071.86, samples=20 00:29:47.880 iops : min= 269, max= 308, avg=299.85, stdev= 8.46, samples=20 00:29:47.880 lat (msec) : 10=52.02%, 20=47.82%, 50=0.03%, 100=0.13% 00:29:47.880 cpu : usr=95.58%, sys=4.11%, ctx=21, majf=0, minf=187 00:29:47.880 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 issued rwts: total=3001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:47.880 filename0: (groupid=0, jobs=1): err= 0: pid=4100120: Mon Jul 15 18:44:31 2024 00:29:47.880 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10045msec) 00:29:47.880 slat (nsec): min=6382, max=39723, avg=15832.41, stdev=6562.86 00:29:47.880 clat (usec): min=6893, max=48917, avg=11000.37, stdev=1249.11 00:29:47.880 lat (usec): min=6907, max=48938, avg=11016.20, stdev=1248.99 00:29:47.880 clat percentiles (usec): 00:29:47.880 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:29:47.880 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:29:47.880 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:29:47.880 | 99.00th=[12911], 99.50th=[13173], 99.90th=[15926], 99.95th=[45876], 00:29:47.880 | 99.99th=[49021] 00:29:47.880 bw ( KiB/s): min=34048, max=35840, per=32.10%, avg=34931.20, stdev=528.41, samples=20 00:29:47.880 iops : min= 266, max= 280, avg=272.90, stdev= 4.13, samples=20 00:29:47.880 lat (msec) : 10=7.29%, 20=92.64%, 50=0.07% 00:29:47.880 cpu : usr=95.77%, sys=3.90%, ctx=24, majf=0, minf=143 00:29:47.880 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 issued rwts: total=2731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:47.880 filename0: (groupid=0, jobs=1): err= 0: pid=4100121: Mon Jul 15 18:44:31 2024 00:29:47.880 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10046msec) 00:29:47.880 slat (nsec): min=6322, max=47494, avg=14433.84, stdev=6442.37 00:29:47.880 clat (usec): min=6626, max=48185, avg=10701.02, stdev=1232.52 00:29:47.880 lat (usec): min=6637, max=48216, avg=10715.45, stdev=1232.64 00:29:47.880 clat percentiles (usec): 00:29:47.880 | 
1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:29:47.880 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:29:47.880 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:47.880 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13960], 99.95th=[46400], 00:29:47.880 | 99.99th=[47973] 00:29:47.880 bw ( KiB/s): min=35072, max=36864, per=33.01%, avg=35916.80, stdev=492.08, samples=20 00:29:47.880 iops : min= 274, max= 288, avg=280.60, stdev= 3.84, samples=20 00:29:47.880 lat (msec) : 10=16.77%, 20=83.16%, 50=0.07% 00:29:47.880 cpu : usr=95.16%, sys=4.53%, ctx=22, majf=0, minf=112 00:29:47.880 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.880 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:47.880 00:29:47.880 Run status group 0 (all jobs): 00:29:47.880 READ: bw=106MiB/s (111MB/s), 34.0MiB/s-37.3MiB/s (35.6MB/s-39.2MB/s), io=1068MiB (1119MB), run=10045-10046msec 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.880 00:29:47.880 real 0m11.284s 00:29:47.880 user 0m35.658s 00:29:47.880 sys 0m1.531s 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:47.880 18:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.880 ************************************ 00:29:47.880 END TEST fio_dif_digest 00:29:47.880 ************************************ 00:29:47.880 18:44:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:47.880 18:44:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:47.880 18:44:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.880 18:44:31 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:29:47.880 rmmod nvme_tcp 00:29:47.880 rmmod nvme_fabrics 00:29:47.880 rmmod nvme_keyring 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 4091471 ']' 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 4091471 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 4091471 ']' 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 4091471 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091471 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091471' 00:29:47.880 killing process with pid 4091471 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@967 -- # kill 4091471 00:29:47.880 18:44:32 nvmf_dif -- common/autotest_common.sh@972 -- # wait 4091471 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:47.880 18:44:32 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:49.786 Waiting for block devices as requested 00:29:49.786 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:49.786 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:49.786 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:49.786 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:49.786 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:50.045 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:50.045 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:50.045 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:50.045 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:50.304 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:50.304 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:50.304 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:50.564 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:50.564 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:50.564 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:50.564 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:50.823 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:50.823 18:44:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:50.823 18:44:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:50.823 18:44:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:50.823 18:44:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:50.823 18:44:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.823 18:44:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:50.823 18:44:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.359 18:44:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:53.359 00:29:53.359 real 1m14.766s 00:29:53.359 user 7m12.077s 00:29:53.359 sys 0m18.741s 00:29:53.359 18:44:38 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:29:53.359 18:44:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:53.359 ************************************ 00:29:53.359 END TEST nvmf_dif 00:29:53.359 ************************************ 00:29:53.359 18:44:38 -- common/autotest_common.sh@1142 -- # return 0 00:29:53.359 18:44:38 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:53.359 18:44:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:53.359 18:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.359 18:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:53.359 ************************************ 00:29:53.359 START TEST nvmf_abort_qd_sizes 00:29:53.359 ************************************ 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:53.359 * Looking for test storage... 00:29:53.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.359 18:44:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:53.359 18:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.631 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:58.632 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:58.632 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:58.632 Found net devices under 0000:86:00.0: cvl_0_0 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:58.632 Found net devices under 0000:86:00.1: cvl_0_1 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
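Both ports of the detected E810 NIC (cvl_0_0 and cvl_0_1) sit on one host, so the harness in the trace that follows isolates the target port in its own network namespace; that way the NVMe/TCP traffic leaves through the physical ports rather than being short-circuited through the local stack. Stripped of the tracing, the setup is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                   # root ns -> target, sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator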
00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.632 18:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:58.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:29:58.632 00:29:58.632 --- 10.0.0.2 ping statistics --- 00:29:58.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.632 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:58.632 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:58.891 00:29:58.891 --- 10.0.0.1 ping statistics --- 00:29:58.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.891 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:58.891 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.891 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:58.891 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:58.891 18:44:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:01.491 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:01.491 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:01.750 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:01.750 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:01.750 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:01.750 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:01.750 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:03.126 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=4108106 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 4108106 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 4108106 ']' 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:03.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.126 18:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:03.126 [2024-07-15 18:44:48.607495] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:03.126 [2024-07-15 18:44:48.607536] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.126 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.126 [2024-07-15 18:44:48.664732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.384 [2024-07-15 18:44:48.747752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.384 [2024-07-15 18:44:48.747789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.384 [2024-07-15 18:44:48.747796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.384 [2024-07-15 18:44:48.747802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.384 [2024-07-15 18:44:48.747807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.384 [2024-07-15 18:44:48.747849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.384 [2024-07-15 18:44:48.747978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.384 [2024-07-15 18:44:48.751372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.384 [2024-07-15 18:44:48.751373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:03.951 18:44:49 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.951 18:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:03.951 ************************************ 00:30:03.951 START TEST spdk_target_abort 00:30:03.951 ************************************ 00:30:03.951 18:44:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:03.951 18:44:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:03.951 18:44:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:03.951 18:44:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.951 18:44:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.234 spdk_targetn1 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.234 [2024-07-15 18:44:52.331675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.234 [2024-07-15 18:44:52.360479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:07.234 18:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:07.234 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:10.518 Initializing NVMe Controllers 00:30:10.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:10.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:10.519 Initialization complete. Launching workers. 00:30:10.519 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16822, failed: 0 00:30:10.519 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1310, failed to submit 15512 00:30:10.519 success 747, unsuccess 563, failed 0 00:30:10.519 18:44:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:10.519 18:44:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:10.519 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.803 Initializing NVMe Controllers 00:30:13.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:13.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:13.803 Initialization complete. Launching workers. 00:30:13.803 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8673, failed: 0 00:30:13.803 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7448 00:30:13.803 success 301, unsuccess 924, failed 0 00:30:13.803 18:44:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:13.803 18:44:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:13.803 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.086 Initializing NVMe Controllers 00:30:17.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:17.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:17.086 Initialization complete. Launching workers. 
00:30:17.086 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39234, failed: 0 00:30:17.086 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2868, failed to submit 36366 00:30:17.086 success 580, unsuccess 2288, failed 0 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.086 18:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4108106 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 4108106 ']' 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 4108106 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108106 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108106' 00:30:18.485 killing process with pid 4108106 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 4108106 00:30:18.485 18:45:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 4108106 00:30:18.743 00:30:18.743 real 0m14.651s 00:30:18.743 user 0m58.460s 00:30:18.743 sys 0m2.239s 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.743 ************************************ 00:30:18.743 END TEST spdk_target_abort 00:30:18.743 ************************************ 00:30:18.743 18:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:18.743 18:45:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:18.743 18:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.743 18:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.743 18:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:18.743 
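Annotation: the spdk_target_abort phase above drives a single helper, rabort(), whose steps are visible in the xtrace lines ("target=trtype:tcp", "target='trtype:tcp adrfam:IPv4'", and so on). A minimal sketch of that loop, reconstructed from the trace rather than quoted from target/abort_qd_sizes.sh; SPDK_DIR stands in for the repo root used in this run (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

rabort() {
    # Arguments mirror the call in the trace: trtype adrfam traddr trsvcid subnqn.
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds=(4 24 64) qd target="" r
    # Assemble the transport ID string field by field, exactly as the
    # repeated target='trtype:tcp adrfam:IPv4 ...' lines above show.
    for r in trtype adrfam traddr trsvcid subnqn; do
        target="${target:+$target }$r:${!r}"
    done
    # Run the abort example once per queue depth against that target.
    for qd in "${qds[@]}"; do
        "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}

# Invocation seen above: rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn

The same helper is reused unchanged by the kernel_target_abort phase that starts below, only with traddr 10.0.0.1.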
************************************ 00:30:18.743 START TEST kernel_target_abort 00:30:18.743 ************************************ 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:18.743 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:18.744 18:45:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:21.277 Waiting for block devices as requested 00:30:21.537 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:21.537 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:21.537 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:21.796 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:21.796 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:21.796 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:22.054 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:22.054 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:22.054 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:22.313 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:22.313 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:22.313 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:22.313 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:22.572 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:22.572 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:22.572 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:22.830 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:22.830 No valid GPT data, bailing 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:22.830 18:45:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:22.830 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:23.088 00:30:23.088 Discovery Log Number of Records 2, Generation counter 2 00:30:23.088 =====Discovery Log Entry 0====== 00:30:23.088 trtype: tcp 00:30:23.088 adrfam: ipv4 00:30:23.088 subtype: current discovery subsystem 00:30:23.088 treq: not specified, sq flow control disable supported 00:30:23.088 portid: 1 00:30:23.088 trsvcid: 4420 00:30:23.088 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:23.088 traddr: 10.0.0.1 00:30:23.088 eflags: none 00:30:23.088 sectype: none 00:30:23.088 =====Discovery Log Entry 1====== 00:30:23.088 trtype: tcp 00:30:23.088 adrfam: ipv4 00:30:23.088 subtype: nvme subsystem 00:30:23.088 treq: not specified, sq flow control disable supported 00:30:23.088 portid: 1 00:30:23.088 trsvcid: 4420 00:30:23.088 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:23.088 traddr: 10.0.0.1 00:30:23.088 eflags: none 00:30:23.088 sectype: none 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:23.088 18:45:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:23.088 18:45:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:23.088 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.370 Initializing NVMe Controllers 00:30:26.370 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:26.370 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:26.370 Initialization complete. Launching workers. 00:30:26.370 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93130, failed: 0 00:30:26.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93130, failed to submit 0 00:30:26.371 success 0, unsuccess 93130, failed 0 00:30:26.371 18:45:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:26.371 18:45:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.371 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.682 Initializing NVMe Controllers 00:30:29.682 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:29.682 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:29.682 Initialization complete. Launching workers. 
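Annotation: for kernel_target_abort, configure_kernel_target builds the target out of kernel nvmet configfs entries instead of an SPDK process. xtrace does not print the redirect targets of the echo commands traced above, so the sketch below fills in the standard nvmet attribute names; treat those filenames as reconstructed assumptions, not text quoted from nvmf/common.sh:

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string
echo 1 > "$subsys/attr_allow_any_host"                         # no host ACL
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back NSID 1 with the local disk
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port

The clean_kernel_target teardown traced later is the mirror image: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.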
00:30:29.682 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150605, failed: 0 00:30:29.682 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37694, failed to submit 112911 00:30:29.682 success 0, unsuccess 37694, failed 0 00:30:29.682 18:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:29.682 18:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:29.682 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.212 Initializing NVMe Controllers 00:30:32.212 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:32.212 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:32.212 Initialization complete. Launching workers. 00:30:32.212 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145605, failed: 0 00:30:32.212 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36446, failed to submit 109159 00:30:32.212 success 0, unsuccess 36446, failed 0 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:32.212 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:32.471 18:45:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:35.056 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:35.056 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:35.342 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:35.342 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:35.342 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:30:35.342 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:35.342 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:36.718 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:36.718 00:30:36.718 real 0m17.923s 00:30:36.718 user 0m9.042s 00:30:36.718 sys 0m4.857s 00:30:36.718 18:45:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:36.718 18:45:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.718 ************************************ 00:30:36.718 END TEST kernel_target_abort 00:30:36.718 ************************************ 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.718 rmmod nvme_tcp 00:30:36.718 rmmod nvme_fabrics 00:30:36.718 rmmod nvme_keyring 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 4108106 ']' 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 4108106 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 4108106 ']' 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 4108106 00:30:36.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4108106) - No such process 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 4108106 is not found' 00:30:36.718 Process with pid 4108106 is not found 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:36.718 18:45:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:40.004 Waiting for block devices as requested 00:30:40.004 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:40.004 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:40.004 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:40.263 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:40.263 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:40.263 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:40.521 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:40.521 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:40.521 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:40.521 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:40.780 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:40.780 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:40.780 18:45:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.312 18:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.312 00:30:43.312 real 0m49.953s 00:30:43.312 user 1m11.681s 00:30:43.312 sys 0m15.711s 00:30:43.312 18:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.312 18:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:43.312 ************************************ 00:30:43.312 END TEST nvmf_abort_qd_sizes 00:30:43.312 ************************************ 00:30:43.312 18:45:28 -- common/autotest_common.sh@1142 -- # return 0 00:30:43.312 18:45:28 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:43.312 18:45:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:43.312 18:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.312 18:45:28 -- common/autotest_common.sh@10 -- # set +x 00:30:43.312 ************************************ 00:30:43.312 START TEST keyring_file 00:30:43.312 ************************************ 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:43.312 * Looking for test storage... 
00:30:43.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.312 18:45:28 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.312 18:45:28 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.312 18:45:28 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.312 18:45:28 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.312 18:45:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.312 18:45:28 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.312 18:45:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:43.312 18:45:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.U0P4rMXrT0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:43.312 18:45:28 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.U0P4rMXrT0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.U0P4rMXrT0 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.U0P4rMXrT0 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.naj66PxE25 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:43.312 18:45:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.naj66PxE25 00:30:43.312 18:45:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.naj66PxE25 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.naj66PxE25 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=4117619 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:43.312 18:45:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4117619 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4117619 ']' 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.312 18:45:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:43.312 [2024-07-15 18:45:28.714787] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:30:43.312 [2024-07-15 18:45:28.714836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117619 ] 00:30:43.312 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.312 [2024-07-15 18:45:28.782893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.312 [2024-07-15 18:45:28.860203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:44.247 18:45:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.247 [2024-07-15 18:45:29.520375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.247 null0 00:30:44.247 [2024-07-15 18:45:29.552426] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:44.247 [2024-07-15 18:45:29.552736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.247 [2024-07-15 18:45:29.560436] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.247 18:45:29 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.247 [2024-07-15 18:45:29.572468] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:44.247 request: 00:30:44.247 { 00:30:44.247 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:44.247 "secure_channel": false, 00:30:44.247 "listen_address": { 00:30:44.247 "trtype": "tcp", 00:30:44.247 "traddr": "127.0.0.1", 00:30:44.247 "trsvcid": "4420" 00:30:44.247 }, 00:30:44.247 "method": "nvmf_subsystem_add_listener", 00:30:44.247 "req_id": 1 00:30:44.247 } 00:30:44.247 Got JSON-RPC error response 00:30:44.247 response: 00:30:44.247 { 00:30:44.247 "code": -32602, 00:30:44.247 "message": "Invalid parameters" 00:30:44.247 } 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 
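Annotation: the NOT wrapper used above inverts a command's exit status so that an expected failure (here, registering a listener address that is already in use, rejected as JSON-RPC -32602 "Invalid parameters") counts as a pass. A simplified sketch of the pattern; the real helper in autotest_common.sh also tracks the saved status in es, as the es bookkeeping around this point shows:

NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded: fail the test
    fi
    return 0       # command failed, which is what was expected
}

# Re-adding the listener from this phase (rpc.py path shortened for readability):
NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
    nqn.2016-06.io.spdk:cnode0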
00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:44.247 18:45:29 keyring_file -- keyring/file.sh@46 -- # bperfpid=4117729 00:30:44.247 18:45:29 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:44.247 18:45:29 keyring_file -- keyring/file.sh@48 -- # waitforlisten 4117729 /var/tmp/bperf.sock 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4117729 ']' 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:44.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:44.247 18:45:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.247 [2024-07-15 18:45:29.626149] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:30:44.247 [2024-07-15 18:45:29.626193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117729 ] 00:30:44.247 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.247 [2024-07-15 18:45:29.693167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.247 [2024-07-15 18:45:29.774088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.180 18:45:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.180 18:45:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:45.180 18:45:30 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:45.180 18:45:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:45.180 18:45:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.naj66PxE25 00:30:45.180 18:45:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.naj66PxE25 00:30:45.438 18:45:30 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:45.438 18:45:30 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.438 18:45:30 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.U0P4rMXrT0 == \/\t\m\p\/\t\m\p\.\U\0\P\4\r\M\X\r\T\0 ]] 00:30:45.438 18:45:30 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:45.438 18:45:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.438 18:45:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.696 18:45:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.naj66PxE25 == \/\t\m\p\/\t\m\p\.\n\a\j\6\6\P\x\E\2\5 ]] 00:30:45.696 18:45:31 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:45.696 18:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.696 18:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:45.696 18:45:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.696 18:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.696 18:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.954 18:45:31 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:45.954 18:45:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:45.954 18:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:45.954 18:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.954 18:45:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.954 18:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.954 18:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.212 18:45:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:46.212 18:45:31 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:46.212 [2024-07-15 18:45:31.670969] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:46.212 nvme0n1 00:30:46.212 18:45:31 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:46.212 18:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.470 18:45:31 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:46.470 18:45:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:46.470 18:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:46.470 18:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.470 18:45:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.470 18:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:46.470 18:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.729 18:45:32 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:46.729 18:45:32 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.729 Running I/O for 1 seconds... 00:30:47.663 00:30:47.663 Latency(us) 00:30:47.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.663 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:47.663 nvme0n1 : 1.00 19221.92 75.09 0.00 0.00 6644.19 3432.84 11609.23 00:30:47.663 =================================================================================================================== 00:30:47.663 Total : 19221.92 75.09 0.00 0.00 6644.19 3432.84 11609.23 00:30:47.663 0 00:30:47.926 18:45:33 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:47.926 18:45:33 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:47.926 18:45:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.184 18:45:33 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:48.184 18:45:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:48.184 18:45:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.184 18:45:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.184 18:45:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.184 18:45:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.184 18:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.442 18:45:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:48.442 18:45:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.442 18:45:33 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.442 [2024-07-15 18:45:33.925683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:48.442 [2024-07-15 18:45:33.925877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4770 (107): Transport endpoint is not connected 00:30:48.442 [2024-07-15 18:45:33.926871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e4770 (9): Bad file descriptor 00:30:48.442 [2024-07-15 18:45:33.927872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:48.442 [2024-07-15 18:45:33.927883] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:48.442 [2024-07-15 18:45:33.927890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:48.442 request: 00:30:48.442 { 00:30:48.442 "name": "nvme0", 00:30:48.442 "trtype": "tcp", 00:30:48.442 "traddr": "127.0.0.1", 00:30:48.442 "adrfam": "ipv4", 00:30:48.442 "trsvcid": "4420", 00:30:48.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.442 "prchk_reftag": false, 00:30:48.442 "prchk_guard": false, 00:30:48.442 "hdgst": false, 00:30:48.442 "ddgst": false, 00:30:48.442 "psk": "key1", 00:30:48.442 "method": "bdev_nvme_attach_controller", 00:30:48.442 "req_id": 1 00:30:48.442 } 00:30:48.442 Got JSON-RPC error response 00:30:48.442 response: 00:30:48.442 { 00:30:48.442 "code": -5, 00:30:48.442 "message": "Input/output error" 00:30:48.442 } 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:48.442 18:45:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:48.442 18:45:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.442 18:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.699 18:45:34 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:48.699 18:45:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:48.699 18:45:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.699 18:45:34 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.699 18:45:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.699 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.699 18:45:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.957 18:45:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:48.957 18:45:34 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:48.957 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:48.957 18:45:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:48.957 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:49.215 18:45:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:49.215 18:45:34 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:49.215 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.473 18:45:34 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:49.474 18:45:34 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.474 [2024-07-15 18:45:34.964914] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U0P4rMXrT0': 0100660 00:30:49.474 [2024-07-15 18:45:34.964938] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:49.474 request: 00:30:49.474 { 00:30:49.474 "name": "key0", 00:30:49.474 "path": "/tmp/tmp.U0P4rMXrT0", 00:30:49.474 "method": "keyring_file_add_key", 00:30:49.474 "req_id": 1 00:30:49.474 } 00:30:49.474 Got JSON-RPC error response 00:30:49.474 response: 00:30:49.474 { 00:30:49.474 "code": -1, 00:30:49.474 "message": "Operation not permitted" 00:30:49.474 } 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:49.474 18:45:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:49.474 18:45:34 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:49.474 18:45:34 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.474 18:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.U0P4rMXrT0 00:30:49.730 18:45:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.U0P4rMXrT0 00:30:49.730 18:45:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:49.730 18:45:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:49.730 18:45:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.730 18:45:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.730 18:45:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.730 18:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.989 18:45:35 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:49.989 18:45:35 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.989 18:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:49.989 [2024-07-15 18:45:35.518388] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.U0P4rMXrT0': No such file or directory 00:30:49.989 [2024-07-15 18:45:35.518409] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:49.989 [2024-07-15 18:45:35.518430] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:49.989 [2024-07-15 18:45:35.518436] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:49.989 [2024-07-15 18:45:35.518443] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:49.989 request: 00:30:49.989 { 00:30:49.989 "name": "nvme0", 00:30:49.989 "trtype": "tcp", 00:30:49.989 "traddr": "127.0.0.1", 00:30:49.989 "adrfam": "ipv4", 00:30:49.989 
"trsvcid": "4420", 00:30:49.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.989 "prchk_reftag": false, 00:30:49.989 "prchk_guard": false, 00:30:49.989 "hdgst": false, 00:30:49.989 "ddgst": false, 00:30:49.989 "psk": "key0", 00:30:49.989 "method": "bdev_nvme_attach_controller", 00:30:49.989 "req_id": 1 00:30:49.989 } 00:30:49.989 Got JSON-RPC error response 00:30:49.989 response: 00:30:49.989 { 00:30:49.989 "code": -19, 00:30:49.989 "message": "No such device" 00:30:49.989 } 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:49.989 18:45:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:49.989 18:45:35 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:49.989 18:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:50.246 18:45:35 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HMNnFdUcHZ 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:50.246 18:45:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HMNnFdUcHZ 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HMNnFdUcHZ 00:30:50.246 18:45:35 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HMNnFdUcHZ 00:30:50.246 18:45:35 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMNnFdUcHZ 00:30:50.246 18:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMNnFdUcHZ 00:30:50.504 18:45:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.504 18:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.762 nvme0n1 00:30:50.762 
18:45:36 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:50.762 18:45:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:50.762 18:45:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.762 18:45:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.762 18:45:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.762 18:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.019 18:45:36 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:51.019 18:45:36 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:51.019 18:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:51.019 18:45:36 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:51.019 18:45:36 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:51.019 18:45:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.019 18:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.019 18:45:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.278 18:45:36 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:51.278 18:45:36 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:51.278 18:45:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:51.278 18:45:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:51.278 18:45:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.278 18:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.278 18:45:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.536 18:45:36 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:51.536 18:45:36 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:51.536 18:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:51.536 18:45:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:51.536 18:45:37 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:51.536 18:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.794 18:45:37 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:51.794 18:45:37 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMNnFdUcHZ 00:30:51.794 18:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMNnFdUcHZ 00:30:52.052 18:45:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.naj66PxE25 00:30:52.052 18:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.naj66PxE25 00:30:52.052 18:45:37 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.052 18:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.310 nvme0n1 00:30:52.310 18:45:37 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:52.310 18:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:52.569 18:45:38 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:52.569 "subsystems": [ 00:30:52.569 { 00:30:52.569 "subsystem": "keyring", 00:30:52.569 "config": [ 00:30:52.569 { 00:30:52.569 "method": "keyring_file_add_key", 00:30:52.569 "params": { 00:30:52.569 "name": "key0", 00:30:52.569 "path": "/tmp/tmp.HMNnFdUcHZ" 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "keyring_file_add_key", 00:30:52.569 "params": { 00:30:52.569 "name": "key1", 00:30:52.569 "path": "/tmp/tmp.naj66PxE25" 00:30:52.569 } 00:30:52.569 } 00:30:52.569 ] 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "subsystem": "iobuf", 00:30:52.569 "config": [ 00:30:52.569 { 00:30:52.569 "method": "iobuf_set_options", 00:30:52.569 "params": { 00:30:52.569 "small_pool_count": 8192, 00:30:52.569 "large_pool_count": 1024, 00:30:52.569 "small_bufsize": 8192, 00:30:52.569 "large_bufsize": 135168 00:30:52.569 } 00:30:52.569 } 00:30:52.569 ] 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "subsystem": "sock", 00:30:52.569 "config": [ 00:30:52.569 { 00:30:52.569 "method": "sock_set_default_impl", 00:30:52.569 "params": { 00:30:52.569 "impl_name": "posix" 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "sock_impl_set_options", 00:30:52.569 "params": { 00:30:52.569 "impl_name": "ssl", 00:30:52.569 "recv_buf_size": 4096, 00:30:52.569 "send_buf_size": 4096, 00:30:52.569 "enable_recv_pipe": true, 00:30:52.569 "enable_quickack": false, 00:30:52.569 "enable_placement_id": 0, 00:30:52.569 "enable_zerocopy_send_server": true, 00:30:52.569 "enable_zerocopy_send_client": false, 00:30:52.569 "zerocopy_threshold": 0, 00:30:52.569 "tls_version": 0, 00:30:52.569 "enable_ktls": false 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "sock_impl_set_options", 00:30:52.569 "params": { 00:30:52.569 "impl_name": "posix", 00:30:52.569 "recv_buf_size": 2097152, 00:30:52.569 "send_buf_size": 2097152, 00:30:52.569 "enable_recv_pipe": true, 00:30:52.569 "enable_quickack": false, 00:30:52.569 "enable_placement_id": 0, 00:30:52.569 "enable_zerocopy_send_server": true, 00:30:52.569 "enable_zerocopy_send_client": false, 00:30:52.569 "zerocopy_threshold": 0, 00:30:52.569 "tls_version": 0, 00:30:52.569 "enable_ktls": false 00:30:52.569 } 00:30:52.569 } 00:30:52.569 ] 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "subsystem": "vmd", 00:30:52.569 "config": [] 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "subsystem": "accel", 00:30:52.569 "config": [ 00:30:52.569 { 00:30:52.569 "method": "accel_set_options", 00:30:52.569 "params": { 00:30:52.569 "small_cache_size": 128, 00:30:52.569 "large_cache_size": 16, 00:30:52.569 "task_count": 2048, 00:30:52.569 "sequence_count": 2048, 00:30:52.569 "buf_count": 2048 00:30:52.569 } 00:30:52.569 } 00:30:52.569 ] 00:30:52.569 
}, 00:30:52.569 { 00:30:52.569 "subsystem": "bdev", 00:30:52.569 "config": [ 00:30:52.569 { 00:30:52.569 "method": "bdev_set_options", 00:30:52.569 "params": { 00:30:52.569 "bdev_io_pool_size": 65535, 00:30:52.569 "bdev_io_cache_size": 256, 00:30:52.569 "bdev_auto_examine": true, 00:30:52.569 "iobuf_small_cache_size": 128, 00:30:52.569 "iobuf_large_cache_size": 16 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "bdev_raid_set_options", 00:30:52.569 "params": { 00:30:52.569 "process_window_size_kb": 1024 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "bdev_iscsi_set_options", 00:30:52.569 "params": { 00:30:52.569 "timeout_sec": 30 00:30:52.569 } 00:30:52.569 }, 00:30:52.569 { 00:30:52.569 "method": "bdev_nvme_set_options", 00:30:52.569 "params": { 00:30:52.569 "action_on_timeout": "none", 00:30:52.569 "timeout_us": 0, 00:30:52.569 "timeout_admin_us": 0, 00:30:52.569 "keep_alive_timeout_ms": 10000, 00:30:52.569 "arbitration_burst": 0, 00:30:52.569 "low_priority_weight": 0, 00:30:52.569 "medium_priority_weight": 0, 00:30:52.569 "high_priority_weight": 0, 00:30:52.569 "nvme_adminq_poll_period_us": 10000, 00:30:52.569 "nvme_ioq_poll_period_us": 0, 00:30:52.569 "io_queue_requests": 512, 00:30:52.569 "delay_cmd_submit": true, 00:30:52.569 "transport_retry_count": 4, 00:30:52.569 "bdev_retry_count": 3, 00:30:52.569 "transport_ack_timeout": 0, 00:30:52.569 "ctrlr_loss_timeout_sec": 0, 00:30:52.569 "reconnect_delay_sec": 0, 00:30:52.569 "fast_io_fail_timeout_sec": 0, 00:30:52.569 "disable_auto_failback": false, 00:30:52.570 "generate_uuids": false, 00:30:52.570 "transport_tos": 0, 00:30:52.570 "nvme_error_stat": false, 00:30:52.570 "rdma_srq_size": 0, 00:30:52.570 "io_path_stat": false, 00:30:52.570 "allow_accel_sequence": false, 00:30:52.570 "rdma_max_cq_size": 0, 00:30:52.570 "rdma_cm_event_timeout_ms": 0, 00:30:52.570 "dhchap_digests": [ 00:30:52.570 "sha256", 00:30:52.570 "sha384", 00:30:52.570 "sha512" 00:30:52.570 ], 00:30:52.570 "dhchap_dhgroups": [ 00:30:52.570 "null", 00:30:52.570 "ffdhe2048", 00:30:52.570 "ffdhe3072", 00:30:52.570 "ffdhe4096", 00:30:52.570 "ffdhe6144", 00:30:52.570 "ffdhe8192" 00:30:52.570 ] 00:30:52.570 } 00:30:52.570 }, 00:30:52.570 { 00:30:52.570 "method": "bdev_nvme_attach_controller", 00:30:52.570 "params": { 00:30:52.570 "name": "nvme0", 00:30:52.570 "trtype": "TCP", 00:30:52.570 "adrfam": "IPv4", 00:30:52.570 "traddr": "127.0.0.1", 00:30:52.570 "trsvcid": "4420", 00:30:52.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.570 "prchk_reftag": false, 00:30:52.570 "prchk_guard": false, 00:30:52.570 "ctrlr_loss_timeout_sec": 0, 00:30:52.570 "reconnect_delay_sec": 0, 00:30:52.570 "fast_io_fail_timeout_sec": 0, 00:30:52.570 "psk": "key0", 00:30:52.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.570 "hdgst": false, 00:30:52.570 "ddgst": false 00:30:52.570 } 00:30:52.570 }, 00:30:52.570 { 00:30:52.570 "method": "bdev_nvme_set_hotplug", 00:30:52.570 "params": { 00:30:52.570 "period_us": 100000, 00:30:52.570 "enable": false 00:30:52.570 } 00:30:52.570 }, 00:30:52.570 { 00:30:52.570 "method": "bdev_wait_for_examine" 00:30:52.570 } 00:30:52.570 ] 00:30:52.570 }, 00:30:52.570 { 00:30:52.570 "subsystem": "nbd", 00:30:52.570 "config": [] 00:30:52.570 } 00:30:52.570 ] 00:30:52.570 }' 00:30:52.570 18:45:38 keyring_file -- keyring/file.sh@114 -- # killprocess 4117729 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 4117729 ']' 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 4117729 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4117729 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4117729' 00:30:52.570 killing process with pid 4117729 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@967 -- # kill 4117729 00:30:52.570 Received shutdown signal, test time was about 1.000000 seconds 00:30:52.570 00:30:52.570 Latency(us) 00:30:52.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.570 =================================================================================================================== 00:30:52.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.570 18:45:38 keyring_file -- common/autotest_common.sh@972 -- # wait 4117729 00:30:52.829 18:45:38 keyring_file -- keyring/file.sh@117 -- # bperfpid=4119239 00:30:52.829 18:45:38 keyring_file -- keyring/file.sh@119 -- # waitforlisten 4119239 /var/tmp/bperf.sock 00:30:52.829 18:45:38 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 4119239 ']' 00:30:52.829 18:45:38 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.829 18:45:38 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:52.829 18:45:38 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.829 18:45:38 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
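The relaunch traced above hands the configuration captured by save_config straight to the new bdevperf via -c /dev/fd/63, which is the read end of a process substitution rather than a file on disk, so the key files and the PSK-protected attach are replayed without being respecified. A hedged sketch of that pattern, with $config holding the JSON dumped above:

# Restart bdevperf and replay the saved JSON config over a file descriptor.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!
# -z parks the app until perform_tests is issued over /var/tmp/bperf.sock,
# which is why the harness waits for the socket before driving I/O.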
00:30:52.829 18:45:38 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:52.829 "subsystems": [ 00:30:52.829 { 00:30:52.829 "subsystem": "keyring", 00:30:52.829 "config": [ 00:30:52.829 { 00:30:52.829 "method": "keyring_file_add_key", 00:30:52.829 "params": { 00:30:52.829 "name": "key0", 00:30:52.829 "path": "/tmp/tmp.HMNnFdUcHZ" 00:30:52.829 } 00:30:52.829 }, 00:30:52.829 { 00:30:52.829 "method": "keyring_file_add_key", 00:30:52.829 "params": { 00:30:52.829 "name": "key1", 00:30:52.829 "path": "/tmp/tmp.naj66PxE25" 00:30:52.829 } 00:30:52.829 } 00:30:52.829 ] 00:30:52.829 }, 00:30:52.829 { 00:30:52.829 "subsystem": "iobuf", 00:30:52.829 "config": [ 00:30:52.829 { 00:30:52.829 "method": "iobuf_set_options", 00:30:52.829 "params": { 00:30:52.829 "small_pool_count": 8192, 00:30:52.829 "large_pool_count": 1024, 00:30:52.829 "small_bufsize": 8192, 00:30:52.829 "large_bufsize": 135168 00:30:52.829 } 00:30:52.829 } 00:30:52.829 ] 00:30:52.829 }, 00:30:52.829 { 00:30:52.829 "subsystem": "sock", 00:30:52.829 "config": [ 00:30:52.829 { 00:30:52.829 "method": "sock_set_default_impl", 00:30:52.829 "params": { 00:30:52.829 "impl_name": "posix" 00:30:52.829 } 00:30:52.829 }, 00:30:52.829 { 00:30:52.829 "method": "sock_impl_set_options", 00:30:52.829 "params": { 00:30:52.829 "impl_name": "ssl", 00:30:52.829 "recv_buf_size": 4096, 00:30:52.829 "send_buf_size": 4096, 00:30:52.829 "enable_recv_pipe": true, 00:30:52.829 "enable_quickack": false, 00:30:52.829 "enable_placement_id": 0, 00:30:52.829 "enable_zerocopy_send_server": true, 00:30:52.829 "enable_zerocopy_send_client": false, 00:30:52.829 "zerocopy_threshold": 0, 00:30:52.829 "tls_version": 0, 00:30:52.829 "enable_ktls": false 00:30:52.829 } 00:30:52.829 }, 00:30:52.829 { 00:30:52.830 "method": "sock_impl_set_options", 00:30:52.830 "params": { 00:30:52.830 "impl_name": "posix", 00:30:52.830 "recv_buf_size": 2097152, 00:30:52.830 "send_buf_size": 2097152, 00:30:52.830 "enable_recv_pipe": true, 00:30:52.830 "enable_quickack": false, 00:30:52.830 "enable_placement_id": 0, 00:30:52.830 "enable_zerocopy_send_server": true, 00:30:52.830 "enable_zerocopy_send_client": false, 00:30:52.830 "zerocopy_threshold": 0, 00:30:52.830 "tls_version": 0, 00:30:52.830 "enable_ktls": false 00:30:52.830 } 00:30:52.830 } 00:30:52.830 ] 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "subsystem": "vmd", 00:30:52.830 "config": [] 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "subsystem": "accel", 00:30:52.830 "config": [ 00:30:52.830 { 00:30:52.830 "method": "accel_set_options", 00:30:52.830 "params": { 00:30:52.830 "small_cache_size": 128, 00:30:52.830 "large_cache_size": 16, 00:30:52.830 "task_count": 2048, 00:30:52.830 "sequence_count": 2048, 00:30:52.830 "buf_count": 2048 00:30:52.830 } 00:30:52.830 } 00:30:52.830 ] 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "subsystem": "bdev", 00:30:52.830 "config": [ 00:30:52.830 { 00:30:52.830 "method": "bdev_set_options", 00:30:52.830 "params": { 00:30:52.830 "bdev_io_pool_size": 65535, 00:30:52.830 "bdev_io_cache_size": 256, 00:30:52.830 "bdev_auto_examine": true, 00:30:52.830 "iobuf_small_cache_size": 128, 00:30:52.830 "iobuf_large_cache_size": 16 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": "bdev_raid_set_options", 00:30:52.830 "params": { 00:30:52.830 "process_window_size_kb": 1024 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": "bdev_iscsi_set_options", 00:30:52.830 "params": { 00:30:52.830 "timeout_sec": 30 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": 
"bdev_nvme_set_options", 00:30:52.830 "params": { 00:30:52.830 "action_on_timeout": "none", 00:30:52.830 "timeout_us": 0, 00:30:52.830 "timeout_admin_us": 0, 00:30:52.830 "keep_alive_timeout_ms": 10000, 00:30:52.830 "arbitration_burst": 0, 00:30:52.830 "low_priority_weight": 0, 00:30:52.830 "medium_priority_weight": 0, 00:30:52.830 "high_priority_weight": 0, 00:30:52.830 "nvme_adminq_poll_period_us": 10000, 00:30:52.830 "nvme_ioq_poll_period_us": 0, 00:30:52.830 "io_queue_requests": 512, 00:30:52.830 "delay_cmd_submit": true, 00:30:52.830 "transport_retry_count": 4, 00:30:52.830 "bdev_retry_count": 3, 00:30:52.830 "transport_ack_timeout": 0, 00:30:52.830 "ctrlr_loss_timeout_sec": 0, 00:30:52.830 "reconnect_delay_sec": 0, 00:30:52.830 "fast_io_fail_timeout_sec": 0, 00:30:52.830 "disable_auto_failback": false, 00:30:52.830 "generate_uuids": false, 00:30:52.830 "transport_tos": 0, 00:30:52.830 "nvme_error_stat": false, 00:30:52.830 "rdma_srq_size": 0, 00:30:52.830 "io_path_stat": false, 00:30:52.830 "allow_accel_sequence": false, 00:30:52.830 "rdma_max_cq_size": 0, 00:30:52.830 "rdma_cm_event_timeout_ms": 0, 00:30:52.830 "dhchap_digests": [ 00:30:52.830 "sha256", 00:30:52.830 "sha384", 00:30:52.830 "sha512" 00:30:52.830 ], 00:30:52.830 "dhchap_dhgroups": [ 00:30:52.830 "null", 00:30:52.830 "ffdhe2048", 00:30:52.830 "ffdhe3072", 00:30:52.830 "ffdhe4096", 00:30:52.830 "ffdhe6144", 00:30:52.830 "ffdhe8192" 00:30:52.830 ] 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": "bdev_nvme_attach_controller", 00:30:52.830 "params": { 00:30:52.830 "name": "nvme0", 00:30:52.830 "trtype": "TCP", 00:30:52.830 "adrfam": "IPv4", 00:30:52.830 "traddr": "127.0.0.1", 00:30:52.830 "trsvcid": "4420", 00:30:52.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.830 "prchk_reftag": false, 00:30:52.830 "prchk_guard": false, 00:30:52.830 "ctrlr_loss_timeout_sec": 0, 00:30:52.830 "reconnect_delay_sec": 0, 00:30:52.830 "fast_io_fail_timeout_sec": 0, 00:30:52.830 "psk": "key0", 00:30:52.830 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.830 "hdgst": false, 00:30:52.830 "ddgst": false 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": "bdev_nvme_set_hotplug", 00:30:52.830 "params": { 00:30:52.830 "period_us": 100000, 00:30:52.830 "enable": false 00:30:52.830 } 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "method": "bdev_wait_for_examine" 00:30:52.830 } 00:30:52.830 ] 00:30:52.830 }, 00:30:52.830 { 00:30:52.830 "subsystem": "nbd", 00:30:52.830 "config": [] 00:30:52.830 } 00:30:52.830 ] 00:30:52.830 }' 00:30:52.830 18:45:38 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.830 18:45:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.830 [2024-07-15 18:45:38.328187] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:30:52.830 [2024-07-15 18:45:38.328237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119239 ] 00:30:52.830 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.089 [2024-07-15 18:45:38.394705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.089 [2024-07-15 18:45:38.473227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.089 [2024-07-15 18:45:38.630488] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:53.655 18:45:39 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:53.655 18:45:39 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:53.655 18:45:39 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:53.655 18:45:39 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:53.655 18:45:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.913 18:45:39 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:53.913 18:45:39 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:53.913 18:45:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.913 18:45:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.913 18:45:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.913 18:45:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.913 18:45:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.171 18:45:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:54.171 18:45:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.171 18:45:39 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:54.171 18:45:39 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:54.171 18:45:39 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:54.171 18:45:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:54.429 18:45:39 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:54.429 18:45:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:54.429 18:45:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HMNnFdUcHZ /tmp/tmp.naj66PxE25 00:30:54.429 18:45:39 keyring_file -- keyring/file.sh@20 -- # killprocess 4119239 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 4119239 ']' 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 4119239 00:30:54.429 18:45:39 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119239 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119239' 00:30:54.429 killing process with pid 4119239 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@967 -- # kill 4119239 00:30:54.429 Received shutdown signal, test time was about 1.000000 seconds 00:30:54.429 00:30:54.429 Latency(us) 00:30:54.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.429 =================================================================================================================== 00:30:54.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:54.429 18:45:39 keyring_file -- common/autotest_common.sh@972 -- # wait 4119239 00:30:54.688 18:45:40 keyring_file -- keyring/file.sh@21 -- # killprocess 4117619 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 4117619 ']' 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 4117619 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4117619 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4117619' 00:30:54.688 killing process with pid 4117619 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@967 -- # kill 4117619 00:30:54.688 [2024-07-15 18:45:40.115412] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:54.688 18:45:40 keyring_file -- common/autotest_common.sh@972 -- # wait 4117619 00:30:54.946 00:30:54.946 real 0m11.982s 00:30:54.946 user 0m28.772s 00:30:54.946 sys 0m2.704s 00:30:54.946 18:45:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:54.946 18:45:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:54.946 ************************************ 00:30:54.946 END TEST keyring_file 00:30:54.946 ************************************ 00:30:54.946 18:45:40 -- common/autotest_common.sh@1142 -- # return 0 00:30:54.946 18:45:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:54.946 18:45:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:54.946 18:45:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:54.946 18:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:54.946 18:45:40 -- common/autotest_common.sh@10 -- # set +x 00:30:54.946 ************************************ 00:30:54.946 START TEST keyring_linux 00:30:54.946 ************************************ 00:30:54.946 18:45:40 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:55.205 * Looking for test storage... 00:30:55.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.205 18:45:40 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.205 18:45:40 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.205 18:45:40 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.205 18:45:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.205 18:45:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.205 18:45:40 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.205 18:45:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:55.205 18:45:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:55.205 18:45:40 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:55.205 /tmp/:spdk-test:key0 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:55.205 18:45:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:55.205 18:45:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:55.205 /tmp/:spdk-test:key1 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4119696 00:30:55.205 18:45:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4119696 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 4119696 ']' 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.205 18:45:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:55.205 [2024-07-15 18:45:40.723726] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
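The prep_key calls traced above for /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 just format the secret and drop it into a mode-0600 file; the 0660 experiment earlier in this log is why the permission bits matter. A sketch of the helper, assuming the optional fourth argument overrides the mktemp default seen in the keyring_file run:

# Sketch of keyring/common.sh's prep_key as traced above.
prep_key() {
    local name key digest path
    name=$1 key=$2 digest=$3 path=${4:-$(mktemp)}
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"   # keyring_file_add_key rejects group/world-accessible files
    echo "$path"         # callers capture this as key0path/key1path
}
prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0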
00:30:55.205 [2024-07-15 18:45:40.723775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119696 ] 00:30:55.205 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.463 [2024-07-15 18:45:40.790443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.463 [2024-07-15 18:45:40.868848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.029 18:45:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:56.029 18:45:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:56.029 18:45:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:56.029 18:45:41 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.029 18:45:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.029 [2024-07-15 18:45:41.539501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.029 null0 00:30:56.029 [2024-07-15 18:45:41.571548] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:56.029 [2024-07-15 18:45:41.571874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.288 18:45:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:56.288 222288709 00:30:56.288 18:45:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:56.288 402840071 00:30:56.288 18:45:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4119924 00:30:56.288 18:45:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4119924 /var/tmp/bperf.sock 00:30:56.288 18:45:41 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 4119924 ']' 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:56.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:56.288 18:45:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.288 [2024-07-15 18:45:41.652056] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
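Unlike the file-based tests, keyring_linux parks the PSKs in the kernel session keyring: the two keyctl add calls above return the kernel-assigned serial numbers (222288709 and 402840071) that check_keys matches later. The same round trip can be done from any shell; the payload below is copied from this log:

# Store the interchange PSK under the description SPDK's Linux keyring looks up.
sn=$(keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves to the same serial number
keyctl print "$sn"                      # dumps the payload for comparison
keyctl unlink "$sn"                     # cleanup; prints "1 links removed"

Because bdevperf is launched with --wait-for-rpc, keyring_linux_set_options --enable can be issued before framework_start_init, which is exactly the sequence traced below.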
00:30:56.288 [2024-07-15 18:45:41.652098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119924 ] 00:30:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.288 [2024-07-15 18:45:41.719795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.288 [2024-07-15 18:45:41.797907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.224 18:45:42 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:57.224 18:45:42 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:57.224 18:45:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:57.224 18:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:57.224 18:45:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:57.224 18:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:57.482 18:45:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.482 18:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.482 [2024-07-15 18:45:42.976573] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:57.740 nvme0n1 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:57.740 18:45:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:57.740 18:45:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:57.740 18:45:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.740 18:45:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:57.740 18:45:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@25 -- # sn=222288709 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 222288709 == \2\2\2\2\8\8\7\0\9 ]] 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 222288709 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:57.997 18:45:43 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:57.997 Running I/O for 1 seconds... 00:30:59.432 00:30:59.432 Latency(us) 00:30:59.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.432 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:59.432 nvme0n1 : 1.01 21515.16 84.04 0.00 0.00 5927.20 4743.56 9736.78 00:30:59.432 =================================================================================================================== 00:30:59.432 Total : 21515.16 84.04 0.00 0.00 5927.20 4743.56 9736.78 00:30:59.432 0 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:59.432 18:45:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:59.432 18:45:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:59.432 18:45:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.432 18:45:44 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.432 18:45:44 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:30:59.691 [2024-07-15 18:45:45.061248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:30:59.691 [2024-07-15 18:45:45.061861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80fd0 (107): Transport endpoint is not connected
00:30:59.691 [2024-07-15 18:45:45.062856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80fd0 (9): Bad file descriptor
00:30:59.691 [2024-07-15 18:45:45.063857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:30:59.691 [2024-07-15 18:45:45.063867] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:30:59.691 [2024-07-15 18:45:45.063873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:30:59.691 request:
00:30:59.691 {
00:30:59.691 "name": "nvme0",
00:30:59.691 "trtype": "tcp",
00:30:59.691 "traddr": "127.0.0.1",
00:30:59.691 "adrfam": "ipv4",
00:30:59.691 "trsvcid": "4420",
00:30:59.691 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:59.691 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:59.691 "prchk_reftag": false,
00:30:59.691 "prchk_guard": false,
00:30:59.691 "hdgst": false,
00:30:59.691 "ddgst": false,
00:30:59.691 "psk": ":spdk-test:key1",
00:30:59.691 "method": "bdev_nvme_attach_controller",
00:30:59.691 "req_id": 1
00:30:59.691 }
00:30:59.691 Got JSON-RPC error response
00:30:59.691 response:
00:30:59.691 {
00:30:59.691 "code": -5,
00:30:59.691 "message": "Input/output error"
00:30:59.691 }
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@33 -- # sn=222288709
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 222288709
00:30:59.691 1 links removed
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@33 -- # sn=402840071
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 402840071
00:30:59.691 1 links removed
00:30:59.691 18:45:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4119924
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 4119924 ']'
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 4119924
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119924
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119924'
00:30:59.691 killing process with pid 4119924
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 4119924
00:30:59.691 Received shutdown signal, test time was about 1.000000 seconds
00:30:59.691
00:30:59.691 Latency(us)
00:30:59.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:59.691 ===================================================================================================================
00:30:59.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:59.691 18:45:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 4119924
00:30:59.949 18:45:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4119696
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 4119696 ']'
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 4119696
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119696
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119696'
00:30:59.949 killing process with pid 4119696
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 4119696
00:30:59.949 18:45:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 4119696
00:31:00.207
00:31:00.207 real 0m5.177s
00:31:00.207 user 0m9.489s
00:31:00.207 sys 0m1.469s
00:31:00.207 18:45:45 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:31:00.207 18:45:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:00.207 ************************************
00:31:00.207 END TEST keyring_linux
00:31:00.207 ************************************
00:31:00.207 18:45:45 -- common/autotest_common.sh@1142 -- # return 0
00:31:00.207 18:45:45 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:31:00.207 18:45:45 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:31:00.207 18:45:45 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:31:00.207 18:45:45 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:31:00.207 18:45:45 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:31:00.207 18:45:45 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:31:00.207 18:45:45 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:31:00.207 18:45:45 -- common/autotest_common.sh@722 -- # xtrace_disable
00:31:00.207 18:45:45 -- common/autotest_common.sh@10 -- # set +x
00:31:00.207 18:45:45 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:31:00.207 18:45:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:31:00.207 18:45:45 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:31:00.207 18:45:45 -- common/autotest_common.sh@10 -- # set +x
00:31:05.501 INFO: APP EXITING
00:31:05.501 INFO: killing all VMs
00:31:05.501 INFO: killing vhost app
00:31:05.501 INFO: EXIT DONE
00:31:08.033 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:31:08.033 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:31:08.033 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:31:11.324 Cleaning
00:31:11.324 Removing: /var/run/dpdk/spdk0/config
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:31:11.324 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:11.324 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:11.324 Removing: /var/run/dpdk/spdk1/config
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:31:11.324 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:31:11.324 Removing: /var/run/dpdk/spdk1/hugepage_info
00:31:11.324 Removing: /var/run/dpdk/spdk1/mp_socket
00:31:11.324 Removing: /var/run/dpdk/spdk2/config
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:31:11.324 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:31:11.324 Removing: /var/run/dpdk/spdk2/hugepage_info
00:31:11.324 Removing: /var/run/dpdk/spdk3/config
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:31:11.324 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:31:11.324 Removing: /var/run/dpdk/spdk3/hugepage_info
00:31:11.324 Removing: /var/run/dpdk/spdk4/config
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:31:11.324 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:31:11.324 Removing: /var/run/dpdk/spdk4/hugepage_info
00:31:11.324 Removing: /dev/shm/bdev_svc_trace.1
00:31:11.324 Removing: /dev/shm/nvmf_trace.0
00:31:11.324 Removing: /dev/shm/spdk_tgt_trace.pid3731852
00:31:11.324 Removing: /var/run/dpdk/spdk0
00:31:11.324 Removing: /var/run/dpdk/spdk1
00:31:11.324 Removing: /var/run/dpdk/spdk2
00:31:11.324 Removing: /var/run/dpdk/spdk3
00:31:11.324 Removing: /var/run/dpdk/spdk4
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3729484
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3730562
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3731852
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3732481
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3733433
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3733674
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3734643
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3734684
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3734995
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3736729
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3738002
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3738287
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3738574
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3738967
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3739313
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3739534
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3739721
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3739998
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3740910
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3743896
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3744160
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3744418
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3744648
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3744925
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3745154
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3745528
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3745657
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3745917
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3746166
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3746366
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3746440
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3746989
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3747244
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3747529
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3747802
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3747830
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3748005
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3748275
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3748546
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3748821
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3749089
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3749349
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3749607
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3749853
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3750101
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3750399
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3750730
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3750985
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3751232
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3751479
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3751733
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3752248
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3752617
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3752868
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3753122
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3753373
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3753625
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3753709
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3754162
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3757865
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3801323
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3806033
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3816115
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3821433
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3825402
00:31:11.324 Removing: /var/run/dpdk/spdk_pid3826094
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3832137
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3838331
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3838333
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3839161
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3839950
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3840864
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3841440
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3841546
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3841782
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3841795
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3841803
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3842715
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3843630
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3844544
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3845126
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3845138
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3845424
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3847003
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3848198
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3856529
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3856785
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3861036
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3866899
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3869496
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3879924
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3888817
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3890769
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3891788
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3908676
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3912452
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3937623
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3942118
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3943719
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3945571
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3945806
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3946038
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3946259
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3946797
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3948634
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3949628
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3950130
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3952363
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3952953
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3953680
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3957901
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3967931
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3972019
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3978425
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3979725
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3981042
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3985444
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3989590
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3997132
00:31:11.325 Removing: /var/run/dpdk/spdk_pid3997170
00:31:11.325 Removing: /var/run/dpdk/spdk_pid4001668
00:31:11.325 Removing: /var/run/dpdk/spdk_pid4001892
00:31:11.325 Removing: /var/run/dpdk/spdk_pid4002121
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4002579
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4002586
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4007074
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4007636
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4011994
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4014724
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4020126
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4026189
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4034747
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4041963
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4041968
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4060245
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4060944
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4061627
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4062175
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4063092
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4063787
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4064367
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4064968
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4069343
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4069585
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4076032
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4076307
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4078531
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4086492
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4086503
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4091741
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4093710
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4095695
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4096742
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4098766
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4099992
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4108733
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4109191
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4109839
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4112393
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4113114
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4113629
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4117619
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4117729
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4119239
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4119696
00:31:11.585 Removing: /var/run/dpdk/spdk_pid4119924
00:31:11.585 Clean
00:31:11.585 18:45:57 -- common/autotest_common.sh@1451 -- # return 0
00:31:11.585 18:45:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:31:11.585 18:45:57 -- common/autotest_common.sh@728 -- # xtrace_disable
00:31:11.585 18:45:57 -- common/autotest_common.sh@10 -- # set +x
00:31:11.844 18:45:57 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:31:11.844 18:45:57 -- common/autotest_common.sh@728 -- # xtrace_disable
00:31:11.844 18:45:57 -- common/autotest_common.sh@10 -- # set +x
00:31:11.844 18:45:57 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:11.844 18:45:57 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:31:11.844 18:45:57 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:31:11.844 18:45:57 -- spdk/autotest.sh@391 -- # hash lcov
00:31:11.844 18:45:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:11.844 18:45:57 -- spdk/autotest.sh@393 -- # hostname
00:31:11.844 18:45:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:31:11.844 geninfo: WARNING: invalid characters removed from testname!
00:31:33.797 18:46:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:33.797 18:46:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:35.701 18:46:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:37.606 18:46:22 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:39.509 18:46:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:40.887 18:46:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:42.808 18:46:28 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:42.808 18:46:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:42.808 18:46:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:42.808 18:46:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:42.808 18:46:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:42.808 18:46:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:42.808 18:46:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:42.808 18:46:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:42.808 18:46:28 -- paths/export.sh@5 -- $ export PATH
00:31:42.808 18:46:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:42.808 18:46:28 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:42.808 18:46:28 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:42.808 18:46:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721061988.XXXXXX
00:31:42.808 18:46:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721061988.DhM8AW
00:31:42.808 18:46:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:42.808 18:46:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:42.808 18:46:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:42.808 18:46:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:42.808 18:46:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:42.808 18:46:28 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:42.808 18:46:28 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:42.808 18:46:28 -- common/autotest_common.sh@10 -- $ set +x
00:31:42.808 18:46:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:42.808 18:46:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:42.808 18:46:28 -- pm/common@17 -- $ local monitor
00:31:42.808 18:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:42.808 18:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:42.808 18:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:42.808 18:46:28 -- pm/common@21 -- $ date +%s
00:31:42.808 18:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:42.808 18:46:28 -- pm/common@21 -- $ date +%s
00:31:42.808 18:46:28 -- pm/common@25 -- $ sleep 1
00:31:42.808 18:46:28 -- pm/common@21 -- $ date +%s
00:31:42.808 18:46:28 -- pm/common@21 -- $ date +%s
00:31:42.808 18:46:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721061988
00:31:42.808 18:46:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721061988
00:31:42.808 18:46:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721061988
00:31:42.808 18:46:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721061988
00:31:42.808 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721061988_collect-vmstat.pm.log
00:31:42.808 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721061988_collect-cpu-load.pm.log
00:31:42.808 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721061988_collect-cpu-temp.pm.log
00:31:42.808 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721061988_collect-bmc-pm.bmc.pm.log
00:31:43.745 18:46:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:43.745 18:46:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:43.745 18:46:29 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:43.745 18:46:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:43.745 18:46:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:43.745 18:46:29 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:43.745 18:46:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:43.745 18:46:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:43.745 18:46:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:43.745 18:46:29 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:43.745 18:46:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:43.745 18:46:29 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:43.745 18:46:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:43.745 18:46:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:43.745 18:46:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:43.745 18:46:29 -- pm/common@44 -- $ pid=4129959
00:31:43.745 18:46:29 -- pm/common@50 -- $ kill -TERM 4129959
00:31:43.745 18:46:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:43.745 18:46:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:43.745 18:46:29 -- pm/common@44 -- $ pid=4129960
00:31:43.745 18:46:29 -- pm/common@50 -- $ kill -TERM 4129960
00:31:43.745 18:46:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:43.745 18:46:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:43.745 18:46:29 -- pm/common@44 -- $ pid=4129962
00:31:43.745 18:46:29 -- pm/common@50 -- $ kill -TERM 4129962
00:31:43.745 18:46:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:43.745 18:46:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:43.745 18:46:29 -- pm/common@44 -- $ pid=4129987
00:31:43.745 18:46:29 -- pm/common@50 -- $ sudo -E kill -TERM 4129987
00:31:43.745 + [[ -n 3624715 ]]
00:31:43.745 + sudo kill 3624715
00:31:43.754 [Pipeline] }
00:31:43.772 [Pipeline] // stage
00:31:43.777 [Pipeline] }
00:31:43.795 [Pipeline] // timeout
00:31:43.799 [Pipeline] }
00:31:43.813 [Pipeline] // catchError
00:31:43.817 [Pipeline] }
00:31:43.829 [Pipeline] // wrap
00:31:43.835 [Pipeline] }
00:31:43.845 [Pipeline] // catchError
00:31:43.851 [Pipeline] stage
00:31:43.853 [Pipeline] { (Epilogue)
00:31:43.863 [Pipeline] catchError
00:31:43.864 [Pipeline] {
00:31:43.875 [Pipeline] echo
00:31:43.876 Cleanup processes
00:31:43.881 [Pipeline] sh
00:31:44.161 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:44.161 4130076 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:44.161 4130357 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:44.177 [Pipeline] sh
00:31:44.508 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:44.508 ++ grep -v 'sudo pgrep'
00:31:44.508 ++ awk '{print $1}'
00:31:44.508 + sudo kill -9 4130076
00:31:44.557 [Pipeline] sh
00:31:44.835 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:54.823 [Pipeline] sh
00:31:55.106 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:55.106 Artifacts sizes are good
00:31:55.122 [Pipeline] archiveArtifacts
00:31:55.129 Archiving artifacts
00:31:55.291 [Pipeline] sh
00:31:55.576 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:55.592 [Pipeline] cleanWs
00:31:55.602 [WS-CLEANUP] Deleting project workspace...
00:31:55.602 [WS-CLEANUP] Deferred wipeout is used...
00:31:55.609 [WS-CLEANUP] done
00:31:55.611 [Pipeline] }
00:31:55.633 [Pipeline] // catchError
00:31:55.646 [Pipeline] sh
00:31:55.928 + logger -p user.info -t JENKINS-CI
00:31:55.938 [Pipeline] }
00:31:55.955 [Pipeline] // stage
00:31:55.962 [Pipeline] }
00:31:55.980 [Pipeline] // node
00:31:55.986 [Pipeline] End of Pipeline
00:31:56.022 Finished: SUCCESS